Site Overview: www.vtrend.org
    • Title: Justin Cheng / clr3
    • Keywords:
    • Description:
    • Domain info:
    • ICP filing number:
    Search Engine Indexing (SEO) Data

    Search engine   Indexed pages   Backlinks   Other
    Google          0               0           PR: 0
    Yahoo           0               -           -
    Soso            0               -           -
    Sogou           0               -           Rating: 0/10
    360 Search      0               -           -
    Alexa Traffic Rank

    Alexa global rank         14,405,772
    Avg. daily IPs            -
    Total daily PV            -
    PV per IP (PV/IP ratio)   -
    Backlinks                 -
    DMOZ listing              -

    (One-week, one-month, and three-month averages not reported; traffic trend chart omitted.)
    Domain Registration (WHOIS) Information

    vtrend.org


    Retrieved: 2016-03-07 01:51:26
    NOT FOUND
    >>> Last update of WHOIS database: 2016-03-06T17:54:47Z <<<

    Access to Public Interest Registry WHOIS information is provided to assist persons in determining the contents of a domain name registration record in the Public Interest Registry registry database. The data in this record is provided by Public Interest Registry for informational purposes only, and Public Interest Registry does not guarantee its accuracy. This service is intended only for query-based access. You agree that you will use this data only for lawful purposes and that, under no circumstances will you use this data to (a) allow, enable, or otherwise support the transmission by e-mail, telephone, or facsimile of mass unsolicited, commercial advertising or solicitations to entities other than the data recipient's own existing customers; or (b) enable high volume, automated, electronic processes that send queries or data to the systems of Registry Operator, a Registrar, or Afilias except as reasonably necessary to register domain names or modify existing registrations. All rights reserved. Public Interest Registry reserves the right to modify these terms at any time. By submitting this query, you agree to abide by this policy.
    Other TLD Variants
    • Top-level domain
    • Related info
    Homepage Snapshot (text-only)
    Crawled: 2016-03-07 01:51:23
    URL: http://www.vtrend.org/
    Title: Justin Cheng / clr3
    Keywords:
    Description:
    Body:
    clr3

    About

    Justin Cheng is a PhD student in Computer Science at Stanford University, advised by Prof. Jure Leskovec and Prof. Michael Bernstein. He's broadly interested in social networks and social computing. Also, he really likes chocolate and ramen, among other edible delights.

    jc14...@cs.stanford.edu
    twitter · github

    Research

    2015

    Measuring Crowdsourcing Effort with Error-Time Curves

    Crowdsourcing systems lack effective measures of the effort required to complete each task. Without knowing how much time workers need to execute a task well, requesters struggle to accurately structure and price their work. Objective measures of effort could better help workers identify tasks that are worth their time. We propose a data-driven effort metric, ETA (error-time area), that can be used to determine a task's fair price. It empirically models the relationship between time and error rate by manipulating the time that workers have to complete a task. ETA reports the area under the error-time curve as a continuous metric of worker effort. The curve's 10th percentile is also interpretable as the minimum time most workers require to complete the task without error, which can be used to price the task. We validate the ETA metric on ten common crowdsourcing tasks, including tagging, transcription, and search, and find that ETA closely tracks how workers would rank these tasks by effort. We also demonstrate how ETA allows requesters to rapidly iterate on task designs and measure whether the changes improve worker efficiency. Our findings can facilitate the process of designing, pricing, and allocating crowdsourcing tasks.

    PDF
    Cheng, J., Teevan, J. & Bernstein, M.S. (2015). Measuring Crowdsourcing Effort with Error-Time Curves. To appear at CHI 2015.

    Break It Down: A Comparison of Macro- and Microtasks

    A large, seemingly overwhelming task can sometimes be transformed into a set of smaller, more manageable microtasks that can each be accomplished independently. In crowdsourcing systems, microtasking enables unskilled workers with limited commitment to work together to complete tasks they would not be able to do individually. We explore the costs and benefits of decomposing macrotasks into microtasks for three task categories: arithmetic, sorting, and transcription. We find that breaking these tasks into microtasks results in longer overall task completion times, but higher quality outcomes and a better experience that may be more resilient to interruptions. These results suggest that microtasks can help people complete high quality work in interruption-driven environments.

    PDF
    Cheng, J., Teevan, J., Iqbal, S. T. & Bernstein, M.S. (2015). Break It Down: A Comparison of Macro- and Microtasks. To appear at CHI 2015.

    Flock: Hybrid Crowd-Machine Learning Classifiers

    Hybrid crowd-machine learning classifiers are classification models that start with a written description of a learning goal, use the crowd to suggest predictive features and label data, and then weigh these features
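    The ETA metric in the first abstract above is, at its core, an area under an error-time curve. A minimal sketch of that computation follows; the function names, the threshold-based pricing shortcut, and the sample data are illustrative assumptions, not taken from the paper.

```python
def eta_score(samples):
    """Area under the error-time curve via the trapezoidal rule.

    `samples` is a list of (time_allotted, error_rate) pairs: the error
    rate observed when workers are given that much time for the task.
    A smaller area indicates a lower-effort task, since errors vanish
    quickly as the time allotment grows.
    """
    pts = sorted(samples)
    area = 0.0
    for (t0, e0), (t1, e1) in zip(pts, pts[1:]):
        area += (e0 + e1) / 2.0 * (t1 - t0)
    return area


def pricing_time(samples, threshold=0.10):
    """Smallest tried allotment whose error rate is at or below
    `threshold` -- a rough stand-in for the abstract's 'minimum time
    most workers require to complete the task without error'."""
    for t, e in sorted(samples):
        if e <= threshold:
            return t
    return None


# Hypothetical error rates for a tagging task at four time allotments.
samples = [(1, 0.9), (2, 0.5), (4, 0.1), (8, 0.0)]
print(eta_score(samples))     # 1.5
print(pricing_time(samples))  # 4
```

    The paper derives the pricing time from the curve's 10th percentile; the simple threshold crossing here only illustrates the idea of reading a fair time allotment off the fitted curve.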
