The Hidden Negative Ethical Threats

Published: 2022-10-07  Author: 留学写作网  Views: 1899

This is a sample essay titled The Hidden Negative Ethical Threats. It discusses hidden negative ethical threats: algorithms contribute to deteriorating ethical standards in many disciplines, and without intervention they will pose a huge threat to social justice in the future.

After artificial intelligence revealed its tremendous potential to the world, algorithms began gaining more attention from the public. More time, effort, and money are being poured into the research and development of algorithms. However, the proverb "every story has two sides" also applies to algorithms. While developers are exploring and expanding the capabilities and complexity of algorithms, hidden weaknesses are also being exposed. When an engineer encounters a problem, he or she will try to relate the problem back to existing knowledge and eventually solve it by following applicable formulas. Algorithms usually compare problems against their databases first, which is similar to a human referring to acquired knowledge. After identifying the problem, algorithms follow formulas, procedures, or programs to produce a solution. The problem is that algorithms also have learning capabilities: they can learn from almost everything they encounter. Algorithms contribute to deteriorating ethical standards in many disciplines, and without intervention they will pose a huge threat to social justice in the future.


Before getting into the negative ethical implications of algorithms, it is important to understand what algorithms are. The algorithm is an important concept in computer science. Basically, an algorithm is a set of directives created to solve a specific type of problem. With the wide application of computers, algorithms have penetrated the field of human knowledge and profoundly changed human thinking; they are an important tool for humans to understand and transform the world. Robin K. Hill (37) also explained the characteristics of the algorithm in his article "What an Algorithm Is". He believes that an algorithm is finite and must be expressible within limited time and space. An algorithm is also abstract and has no spatiotemporal trajectory, which means it is universal and can be applied to any specific instance of the task (Hill 42). In the era of weak artificial intelligence, algorithms are the brains of the computer programs that replace human labor and greatly enhance the efficiency of decision-making.
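This characterization, a finite set of directives, abstract and applicable to any concrete instance of a task, can be illustrated with a classic textbook example (our own illustration, not taken from Hill's article): Euclid's algorithm for the greatest common divisor.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite sequence of directives that
    terminates for any pair of non-negative integers (not both zero)."""
    while b:
        a, b = b, a % b  # replace the pair (a, b) with (b, a mod b)
    return a

# The same abstract procedure applies to any concrete instance:
print(gcd(48, 18))    # 6
print(gcd(270, 192))  # 6
```

The procedure itself has no "spatiotemporal trajectory": the same few lines solve every instance of the problem, which is exactly the universality Hill describes.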

Many people confuse algorithm ethics with robot ethics. Pop culture such as novels and movies has depicted different types of robots that either take control of the world or become loyal servants to human beings. These depictions create the sense that robot and AI ethics are not that relevant to real life. However, in addition to robots, artificial intelligence takes diverse forms, such as the various intelligent software programs and chatbots currently used in medicine, law, education, finance, and even music and painting, which are driven by powerful computing capabilities and big data, as well as intelligent technologies such as natural language understanding and speech recognition (Tzafestas 55). Such applications also bear many ethical responsibilities different from those of robot ethics. At present, most discussions of the ethics of artificial intelligence focus on robots, because this part is most likely to resonate with the public. However, more attention should be given to invisible intelligent software and algorithms because, unlike robots from sci-fi movies, they have already had profound implications for human society.

Discrimination is one of the most obvious ethical problems caused by algorithms nowadays. Discrimination is a long-standing topic originating in colonial history. Whether racial or class-based, it has been in the public realm for centuries at least. Algorithmic discrimination, however, is still a relatively new concept in today's world. The algorithm itself is essentially a mathematical expression. According to Green and Viljoen (5), before an algorithm makes a decision, it uses its own logic as a standard, based on information and data, to divide people into different categories and affix various labels to them. When the algorithm makes decisions, these labels become a measure that further shapes the decisions and contributes to discrimination implicitly. Algorithmic discrimination presupposes the embedding of human values: algorithms are not created out of thin air but are designed by human beings. Even if the designers of an algorithm are totally fair and unbiased, the algorithm still runs a tremendous risk of contamination after processing data from the internet. In other words, algorithms naturally and inevitably carry ethical features.

Algorithmic discrimination is going to pose a great threat to social justice and ethical values as application scenarios multiply. In March 2016, Microsoft's chatbot Tay was released to interact with Twitter users. Within one day, the "maiden chatbot, of innocent heart, benevolent desires, and amiable disposition" was corrupted into "a malevolent, foulmouthed crone" (Davis 21). Tay was badly influenced as soon as it started chatting with users and became an extremely racist algorithm in a very short period of time. This case reveals a discrimination problem that artificial intelligence and machines cannot avoid. Is it reasonable for the algorithmic robot behind a financial institution to refuse someone a loan? Would legal software determine someone's guilt because of the person's race or ethnicity? Since artificial intelligence tends to systematically replicate all the human ethical flaws, both intentional and unintentional, present in the data used for learning, there are countless opportunities for negative ethical outcomes. This creates a hidden but alarming future for AI algorithms.

Another major application of algorithms in the modern world is personalized recommendation on the internet. Many developers consider the user's sensitivity to price when designing a product recommendation system. If a user often opts for the cheaper products in a category, the algorithm determines that the user is highly price-sensitive and relatively "poor." When the system makes recommendations to such a user, it gives priority to particularly cheap items. There are also instances where an algorithm detects the user type or user history and charges different prices for the same products. For example, some games detect the type of device and charge iOS users more than Android users. On some shopping websites, once a customer has become loyal and makes regular purchases, the level of discount is reduced, and the deduction opportunities are given to new customers who have never purchased at the store. Such discrimination has outraged customers in the past. In the long term, customers receive stereotypical treatment from algorithms without knowing it, and their own interests are harmed.
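The implicit labeling described above can be sketched as a toy model. The 0.8 threshold, the label names, and both functions below are hypothetical, invented purely for illustration; real recommendation systems are far more complex:

```python
def label_price_sensitivity(purchases, category_avg):
    """Label a user by comparing each purchase price with the
    category's average price (hypothetical 0.8 threshold)."""
    ratios = [price / category_avg[cat] for cat, price in purchases]
    mean_ratio = sum(ratios) / len(ratios)
    return "price-sensitive" if mean_ratio < 0.8 else "standard"

def rank_items(items, label):
    """Users labeled price-sensitive silently get cheap items first."""
    return sorted(items, key=lambda i: i["price"],
                  reverse=(label != "price-sensitive"))

history = [("shoes", 30.0), ("shoes", 25.0)]   # user keeps buying cheap shoes
label = label_price_sensitivity(history, {"shoes": 50.0})
catalog = [{"name": "A", "price": 90.0}, {"name": "B", "price": 20.0}]
print(label)                                   # price-sensitive
print(rank_items(catalog, label)[0]["name"])   # B
```

The ethical problem the essay describes lives in exactly this kind of code: the user never sees the label, yet it silently changes what is shown and at what price.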

Many people consider the above-mentioned ethical problems to be nothing but minor side effects of a new technology. For example, some hold the belief that since algorithms are based on mathematics, they cannot be biased or discriminatory. "Governments are increasingly looking to utilize Automated Decision Systems (sometimes called 'Robo-adjudication') to gain efficiencies in the administrative state" (Beatson 316). Indeed, machines are superior to human beings in the ability to process large amounts of data and obtain "logical" results. However, mathematics has serious limitations when applied in the social sector. For example, biased results are still generated at a high frequency by AI-supported adjudication. Therefore, the role of AI algorithms is largely confined to providing relevant information in court for reference, and the ultimate decision-making power remains in the hands of human beings (Beatson 330). Although mathematics is an extremely precise and powerful tool, there are certain values and objects in the physical world that simply cannot be quantified. Some practical questions are well beyond the scope of math and reveal that math is not the ultimate answer to ethical algorithms.

There are also people who consider algorithmic discrimination to be no big deal. Indeed, a racist chatbot and the personalized marketing of products do not seem like serious ethical problems, and might be fixed with minor patches. However, such arguments fail to consider the long-term implications for social justice. For example, a crime risk assessment algorithm called COMPAS, used by some courts in the United States, is considered to cause systematic discrimination against black people (Beatson 315). If a black man commits a crime, he is likely to be mislabeled by the system as having a high risk of reoffending, and thus be sentenced by the judge to imprisonment, or to a longer term, instead of probation. In addition, similar incidents of discrimination are gradually increasing. A recommendation algorithm may seem harmless, but when biased algorithms are applied to criminal justice, financial credit, or employment assessment, where central personal interests are at stake, they cause much more severe damage. Therefore, algorithmic discrimination is a much bigger deal than many people would expect.

In the foreseeable future, human dependence on algorithms will only increase. If the human ability to process information is further weakened, machines will likely push more information to us, and humans will become even more willing to rely on algorithms and machines. For example, to travel to a city, people may need a machine to recommend a route. When entering an education system, people may hope to obtain high-quality course recommendations. When planning a career, algorithms will list options for people to choose from. The computer will then make the recommendations deemed most suitable for a person based on previous behaviors, family background, and consumption habits. Consequently, the poor and the rich may end up in different parts of the city and may have entirely different life plans and goals before their lives even begin. The gap between rich and poor may be further widened. The more people rely on algorithms, the more they are affected by these ethical problems.

There are also people who argue that the ethical implications of algorithms can be controlled by controlling the coding process. Code is the governing directive of all algorithms, and by controlling the coding process, the program designer gains a fair level of control over an algorithm's outcomes. For example, according to Florez et al., some "architecture of the code can be exclusionary or discriminatory". In the predictive policing sector, for instance, code structures can be the fundamental reason why racial minority groups are targeted more frequently and convicted of heavier crimes (Florez et al. 155). However, despite the importance of code architecture, controlling the coding process does not mean controlling the ethical implications of the entire algorithm. As illustrated by the case of Tay above, while the robot's designers did not build an exclusionary architecture, the input from Twitter users became the major source of negative impact. Some "primitive" algorithms already behave in puzzling ways that are incomprehensible to human beings, and input data, as the Tay case shows, adds endless uncertainty. Therefore, making fully "controllable algorithms" seems to be an impossible mission at the current stage.

Having recognized the potential dangers of algorithms, it becomes important to find measures to mitigate the negative implications. Transparency is the first potential solution to algorithmic discrimination and other ethical problems. Algorithmic discrimination is, at root, caused by the improper design and application of algorithms, so to control an algorithm's social impact at the source, transparency and accountability are key. There have been voices opposing transparency because some regard it as the enemy of accuracy or efficiency (Martin 842). However, transparency and accuracy are not necessarily contradictory, because the transparency of algorithms does not require full transparency. Instead, there is a whole range of transparency measures targeting different goals, such as bias detection, procedural justice, and corporate responsibility (Martin 845). Even though algorithms seem difficult to understand, transparency efforts will be the first step towards accountable coding.

In addition to transparency, accountability of the algorithm at the enterprise level is also needed to mitigate the negative ethical implications. For example, OPAL (open algorithms) is an innovative solution proposed by the Massachusetts Institute of Technology, with TalkingData also involved in its R&D and application. The principle is similar to federated learning: the calculation result is obtained by moving algorithms to the data rather than moving the data itself. But OPAL is more flexible: it supports not only machine learning algorithms but also other statistical algorithms, and thus has wider applications (Lepri et al. 621). By adding technologies such as auditing and blockchain, designers can guard against risks of contamination more comprehensively. Beyond technology, management is also an important contributor to accountability. Taking the current pandemic as an example, the personal information we provide for disease prevention should be collected privately and transmitted encrypted, and only those with the corresponding level of authorization should be able to view the limited and necessary personal information, so that labeling and discrimination are minimized.
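The "move the algorithm, not the data" principle can be sketched in a few lines. This is a simplified illustration of the idea only, assuming a hypothetical data holder; it is not the actual OPAL or federated-learning API:

```python
class DataHolder:
    """Holds raw records privately; visiting algorithms run inside
    the holder, and only aggregate results ever leave it."""

    def __init__(self, records):
        self._records = records          # raw data never leaves this object

    def run(self, algorithm):
        # The algorithm travels to the data; the caller receives
        # only the computed aggregate, never the records themselves.
        return algorithm(self._records)

def average_age(records):
    """An example statistical algorithm sent to the data holder."""
    return sum(r["age"] for r in records) / len(records)

holder = DataHolder([{"age": 30}, {"age": 40}, {"age": 50}])
print(holder.run(average_age))  # 40.0
```

The design choice is that the trust boundary sits around the data holder: auditing which algorithms are allowed to run (the role the essay assigns to auditing and blockchain) replaces the need to ship raw personal data anywhere.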

Finally, industry ethical standards are needed to regulate companies. Collaboration between different stakeholders is necessary for such standards to be established. The government is responsible for providing legal frameworks that guarantee the basic rights of users and a basic code of conduct for companies. The companies are responsible for following government regulations and helping their customers make informed choices about algorithmic products and services. A sense of corporate social responsibility needs to be established so that every algorithm goes through an assessment to determine its ethical implications before release into the market. There may also be third-party supervisors with no stake in the companies' profits who establish fair standards applying to all competitors in the market. In addition, the media could assume a supervising responsibility and help the public identify potentially discriminatory or unethical algorithms.

We live in an era of unprecedented algorithms. While bringing convenience and benefits to human lives, algorithms also bring discrimination, biases, and other ethical problems in their application. Such problems have already revealed themselves in cases such as the Amazon recommendation system and Microsoft's Tay chatbot. While algorithms are essentially based on mathematics and are controllable to a large extent, the increasing use of big data and deep learning is creating huge uncertainty about the morality of algorithms. Without effective intervention, disasters in morality and social justice could happen in the future. Therefore, the current ethical breaches of algorithms should not be underestimated, because they point to greater risks of social injustice in a future where human beings are more and more dependent on algorithms. To avoid such an unjust future, transparency, accountability, and ethical standards are urgently needed for all types of algorithms.


