Foundations of ethical algorithms
Revision as of 03:19, 15 August 2020
This page is for the course Foundations of Ethical Algorithms.
Contents
- Week 1: Introduction
  - References
    - Privacy
      - L. Sweeney, Simple Demographics Often Identify People Uniquely. Carnegie Mellon University, Data Privacy Working Paper 3. Pittsburgh 2000. (https://dataprivacylab.org/projects/identifiability/paper1.pdf)
      - Netflix Prize. Arvind Narayanan and Vitaly Shmatikov, How To Break Anonymity of the Netflix Prize Dataset (https://www.cs.cornell.edu/~shmat/shmat_oak08netflix.pdf) | FAQ (https://www.cs.cornell.edu/~shmat/netflix-faq.html)
      - GWAS privacy. Homer N, Szelinger S, Redman M, et al. Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays. PLoS Genet. 2008;4(8):e1000167. Published 2008 Aug 29. doi:10.1371/journal.pgen.1000167
    - Fairness
      - Word embedding. Bolukbasi, Chang, Zou, Saligrama, Kalai. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. (https://arxiv.org/abs/1607.06520)
      - COMPAS. Machine Bias (ProPublica) (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) | How We Analyzed the COMPAS Recidivism Algorithm (ProPublica) by Jeff Larson, Surya Mattu, Lauren Kirchner and Julia Angwin (https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm)
    - 2nd Wave of Algorithmic Accountability
      - Julia Powles and Helen Nissenbaum, The Seductive Diversion of Solving Bias in Artificial Intelligence (https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53)
      - Frank Pasquale, The Second Wave of Algorithmic Accountability (https://lpeproject.org/blog/the-second-wave-of-algorithmic-accountability/)
      - Frank Pasquale. 2020. Machines Judging Humans: The Promise and Perils of Formalizing Evaluative Criteria. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES '20). Association for Computing Machinery, New York, NY, USA, 7. (https://dl.acm.org/doi/abs/10.1145/3375627.3375839)
      - Doctorow, Second wave Algorithmic Accountability: from "What should algorithms do?" to "Should we use an algorithm?", BoingBoing (https://boingboing.net/2019/12/04/fundamental-critique.html)
References
The course draws on material from several sources:
- The book The Algorithmic Foundations of Differential Privacy by Cynthia Dwork and Aaron Roth
- Science of Data Ethics (UPenn), taught by Michael Kearns and Kristian Lum
- Ethics in Data Science (Utah), taught by Suresh Venkatasubramanian and Katie Shelef
- Foundations of Fairness in Machine Learning (UW), taught by Jamie Morgenstern
- Explainable AI in Industry: Practical Challenges and Lessons Learned (ACM FAT* 2020 Tutorial)