
The regulation of artificial intelligence (AI) is the development of public-sector policies and laws for promoting and governing AI; it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, notably in the European Union and in supranational bodies such as the IEEE and the OECD, among others. Since 2016, a wave of AI ethics guidelines has been published in an effort to maintain social control over the technology. Regulation is considered necessary both to foster AI and to manage the risks associated with it. Beyond government regulation, enterprises that deploy AI need to play a central role in developing and operating trustworthy AI in accordance with trustworthy-AI principles, and to take responsibility for mitigating the associated risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means of approaching the AI control problem.