About

Justin Cheng is a PhD student in Computer Science at Stanford University, advised by Prof. Jure Leskovec and Prof. Michael Bernstein. He's broadly interested in social networks and social computing. Also, he really likes chocolate and ramen, among other edible delights.

jc14...@cs.stanford.edu
twitter · github

Research

2015

Measuring Crowdsourcing Effort with Error-Time Curves

Crowdsourcing systems lack effective measures of the effort required to complete each task. Without knowing how much time workers need to execute a task well, requesters struggle to accurately structure and price their work. Objective measures of effort could also help workers identify tasks that are worth their time. We propose a data-driven effort metric, ETA (error-time area), that can be used to determine a task's fair price. It empirically models the relationship between time and error rate by manipulating the time that workers have to complete a task. ETA reports the area under the error-time curve as a continuous metric of worker effort. The curve's 10th percentile is also interpretable as the minimum time most workers require to complete the task without error, which can be used to price the task. We validate the ETA metric on ten common crowdsourcing tasks, including tagging, transcription, and search, and find that ETA closely tracks how workers would rank these tasks by effort. We also demonstrate how ETA allows requesters to rapidly iterate on task designs and measure whether the changes improve worker efficiency. Our findings can facilitate the process of designing, pricing, and allocating crowdsourcing tasks.

PDF
Cheng, J., Teevan, J. & Bernstein, M.S. (2015). Measuring Crowdsourcing Effort with Error-Time Curves. To appear at CHI 2015.
(An illustrative sketch of the error-time-area computation appears after the research summaries below.)

Break It Down: A Comparison of Macro- and Microtasks

A large, seemingly overwhelming task can sometimes be transformed into a set of smaller, more manageable microtasks that can each be accomplished independently. In crowdsourcing systems, microtasking enables unskilled workers with limited commitment to work together to complete tasks they would not be able to do individually. We explore the costs and benefits of decomposing macrotasks into microtasks for three task categories: arithmetic, sorting, and transcription. We find that breaking these tasks into microtasks results in longer overall task completion times, but higher-quality outcomes and a better experience that may be more resilient to interruptions. These results suggest that microtasks can help people complete high-quality work in interruption-driven environments.

PDF
Cheng, J., Teevan, J., Iqbal, S. T. & Bernstein, M.S. (2015). Break It Down: A Comparison of Macro- and Microtasks. To appear at CHI 2015.

Flock: Hybrid Crowd-Machine Learning Classifiers

Hybrid crowd-machine learning classifiers are classification models that start with a written description of a learning goal, use the crowd to suggest predictive features and label data, and then weigh these features …
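As a rough aside on the ETA (error-time area) metric summarized above: the sketch below computes the area under an empirical error-time curve and reads off a threshold-crossing time. It is a minimal illustration, not the paper's implementation; the function name, the fixed 10% error threshold, and the sample data are assumptions made here purely for illustration.

import numpy as np

def eta_metric(times, error_rates, threshold=0.10):
    """Illustrative error-time-area computation (assumed interface, not the paper's code).

    times       -- allotted task times in seconds, in ascending order
    error_rates -- observed error rate (0..1) at each allotted time
    threshold   -- error level used to read off a "minimum time" point
    """
    times = np.asarray(times, dtype=float)
    error_rates = np.asarray(error_rates, dtype=float)

    # Area under the empirical error-time curve (trapezoidal rule):
    # a larger area means workers keep making errors for longer, i.e. more effort.
    area = np.trapz(error_rates, times)

    # First allotted time at which the error rate drops to the threshold,
    # read here (an assumption) as "the minimum time most workers need to avoid errors".
    below = times[error_rates <= threshold]
    min_time = float(below[0]) if below.size else None

    return area, min_time

# Hypothetical data: error rates fall as workers are given more time.
area, min_time = eta_metric(
    times=[5, 10, 20, 40, 80],
    error_rates=[0.90, 0.55, 0.25, 0.08, 0.02],
)
print(f"error-time area: {area:.1f}; approx. minimum time: {min_time}s")

A requester could, for instance, apply an hourly wage to the estimated minimum time to set a task's price; the paper's exact definitions and procedure may differ from this sketch.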