This article was produced by NetEase Smart Studio (public account: smartman163). Focus on AI and read about the next big era!
[NetEase Smart News, December 24] Millions of people use artificial intelligence (AI) in some form every day, most of them without realizing it. People all over the world encounter it on their phones and on other platforms.
For example, Google's search suggestions, Facebook's friend recommendations, autocorrect, and predictive text are all built with AI. Because of this, it is more important than ever that we increase the transparency of AI globally, so that we can understand how the technology works, how it makes decisions, and how it supports critical applications in industries such as medicine and finance.
I think we should all agree that AI should not be above the law. At the same time, AI should not be over-regulated. We should hold off on new rules, at least until we have thoroughly analyzed whether new standards and regulations would actually protect consumers or instead hold them back. That analysis should focus on how AI is applied today, its potential economic impact as it is deployed in new areas, and the public's perception of the technology.
At the same time, those of us in the global technology community who develop AI need to jointly address its auditability and transparency. We should commit to investing in research on emerging AI techniques whose behavior cannot be understood simply by inspecting their algorithms. Notably, corporate, government, and academic stakeholders share a responsibility to help people understand when AI is most useful to them, why it can increase productivity, and, importantly, how the data collected about them shapes the way they interact with these systems.
Opening the AI "black box"
Strengthening and expanding transparency, knowledge, and understanding of AI needs to be addressed on several fronts. Let me explain.
Everyone who interacts with AI should know when it is not a human being talking to them, and humans should not pretend to be robots in these applications either. Companies that use AI to interact with customers should make clear what will happen to the resulting data. People who converse with AI-driven platforms and systems should be able to obtain records of those conversations and have them corrected in the event of discrepancies, problems, or disputes.
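The practices above — disclosing that a bot is speaking and keeping a reviewable record of the conversation — can be sketched in a few lines. This is a minimal illustration, not a real product's API; the field names and the example bot are hypothetical.

```python
# Minimal sketch of an auditable chat log for a hypothetical
# customer-service bot. Field names are illustrative, not a standard.
import json
from datetime import datetime, timezone

def log_turn(log, speaker, text, is_bot):
    """Append one conversation turn, recording whether a bot spoke."""
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "speaker": speaker,
        "is_bot": is_bot,   # disclosed to the user, never hidden
        "text": text,
    })

transcript = []
log_turn(transcript, "assistant", "Hi, I'm an automated assistant.", True)
log_turn(transcript, "customer", "I was double-billed last month.", False)

# The stored record can later be replayed, audited, or corrected
# if a discrepancy or dispute arises.
print(json.dumps(transcript, indent=2))
```

Storing the `is_bot` flag alongside each turn makes the disclosure itself part of the audit trail, so a later reviewer can verify that the user was told they were talking to a machine.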
Those who develop AI for consumer and enterprise applications need to create, acquire, and test data responsibly. We need bias-detection tests that determine whether an AI system meets agreed standards and test protocols. More specifically, before an AI system leaves the lab, engineers need to simulate and understand how its underlying datasets will interact with users in a variety of environments.
Engineers need to test products to make sure they do not harm humans. To protect users, they already test AI applications for usability, security, scalability, and safety, but there is currently no comparable testing for the social, ethical, or emotional harm these systems might cause. As an emerging industry, why don't we add bias testing to the development cycle, to ensure that the algorithms in an AI application are not biased and will not cause chronic or lasting harm to users?
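A bias test in the development cycle can be as simple as an assertion that runs alongside ordinary unit tests. The sketch below checks a binary classifier's outputs against the "four-fifths rule," a common demographic-parity heuristic; the model outputs, group labels, and threshold are hypothetical examples, not a complete fairness methodology.

```python
# Minimal sketch of a bias test that could run in a CI pipeline.
# The predictions, groups, and 0.8 threshold are illustrative.

def selection_rates(predictions, groups):
    """Approval rate of a binary classifier (1 = approve) per group."""
    rates = {}
    for group in set(groups):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(preds) / len(preds)
    return rates

def passes_four_fifths_rule(predictions, groups, threshold=0.8):
    """Demographic-parity check: the lowest group's selection rate
    must be at least `threshold` times the highest group's rate."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) >= threshold * max(rates.values())

# Example: a loan-approval model's outputs for two groups.
preds  = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]
groups = ["a"] * 5 + ["b"] * 5
print(selection_rates(preds, groups))        # group "a": 0.8, group "b": 0.2
print(passes_four_fifths_rule(preds, groups))  # False: the gap is too large
```

Failing this check would block a release the same way a failing functional test does, which is exactly the kind of gate the paragraph above argues for.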
AI engineers also need to share best practices and new findings on eliminating bias, both inside and outside their companies. They need to make sure that the AI they develop, and the data it uses, reflect the diversity of the users it serves. Most importantly, when the technology goes wrong, companies need to make hard decisions to rein it in, such as shutting off a biased algorithm or increasing the transparency of how it works. By ensuring that the technology and data are diverse, objective, and sound, users will encounter far fewer biased outcomes when they try to access information.
The AI industry should regulate itself, in alignment with the goals of corporate boards and management teams. This requires unified, opt-in principles and guidelines, such as the ethics guidelines we have published at Sage for the development and application of AI solutions. There should also be partnerships between governments and companies to share information about AI safety in real time. If such genuine partnerships are achieved, they will significantly increase the transparency of AI use and set an example for other jurisdictions considering AI for business development, public services, and social welfare.
Companies should also conduct internal reviews of AI to understand where it can be used to greatest effect and how many people need to be trained to deliver AI-driven services well. They should create workforce training and acceleration programs to encourage more talent with AI research and development skills to drive this work.
Making AI transparent is a global effort
Fundamentally, our technology community needs to define what AI transparency means and work together to apply it to innovative AI technology. We need to stop treating AI as a black box and resolve its auditability and traceability issues; solving them will put us on the right path. We need to spread knowledge about AI and clarify its many use cases, from technology to medical care, from transportation to social security and family life.
Because of the current lack of transparency and education around AI, people have lost faith in a technology that is essential to future development. If we confront this problem as an industry, we will be able to truly democratize information through technologies like AI, bringing us together into a global community of technologists and users.
If we are to achieve universal transparency, we must implement it at the level of both algorithms and data. The global AI community needs to work together to contain and eliminate bias through proper testing, training, and education. This is what I have been committed to throughout my career, and I am not the only one: projects such as the UK Parliament's artificial intelligence committee, the Artificial Intelligence Institute at New York University, and the MIT Media Lab's AI and governance initiatives are all working to create transparent, ethical AI. We still have a lot of work to do, but AI transparency is very much worth striving for.
(Source: TechCrunch. Compiled by: NetEase See Compiled Robot. Reviewer: Fu Zeng)
Follow the NetEase Smart public account (smartman163) for the latest reports on the AI industry.