How Can AI Be Used Safely? Expert Researchers Weigh In

Image: Shutter2U/Adobe Stock

An important focus of AI research is improving AI systems' factuality and trustworthiness. Even though significant progress has been made in these areas, some AI experts are pessimistic that the issues will be resolved in the near future. That is one of the main findings of a new report from the Association for the Advancement of Artificial Intelligence (AAAI), which includes insights from experts at academic institutions such as MIT, Harvard, and the University of Oxford, as well as tech giants such as Microsoft and IBM.

The goal of the study was to define current trends and research challenges so that AI can be made more capable and reliable and used safely, wrote AAAI President Francesca Rossi. The report covers 17 AI research topics, culled by a group of 24 experienced and "very diverse" AI researchers along with 475 respondents from the AAAI community, she noted. Here are highlights from the report.

Improving an AI system’s trustworthiness and factuality

An AI system is considered factual if it doesn't output false statements, and its trustworthiness can be improved by including criteria "such as human understandability, robustness, and the incorporation of human values," the report's authors stated.

Other approaches to consider are fine-tuning models, verifying machine outputs, and replacing complex models with simpler, understandable ones.
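That last idea, swapping a complex model for a simple, understandable one, is often approached by training an interpretable surrogate to mimic the complex model's predictions. Below is a minimal sketch of that technique, assuming scikit-learn and a synthetic dataset; the models and data are illustrative, not taken from the AAAI report.

    # Minimal sketch: approximate a complex model with an interpretable surrogate.
    # Assumes scikit-learn; the dataset and models are illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The "complex" model whose behavior we want to make understandable.
    complex_model = RandomForestClassifier(n_estimators=200, random_state=0)
    complex_model.fit(X_train, y_train)

    # Fit a shallow decision tree to mimic the complex model's predictions.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, complex_model.predict(X_train))

    # Fidelity: how often the surrogate agrees with the complex model on held-out data.
    fidelity = accuracy_score(complex_model.predict(X_test), surrogate.predict(X_test))
    print(f"surrogate fidelity: {fidelity:.2%}")
    print(export_text(surrogate))  # human-readable decision rules

The trade-off is explicit here: a depth-3 tree is readable at a glance, but its fidelity score shows how much of the complex model's behavior is lost in the simplification.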

SEE: How to Keep AI Trustworthy from TechRepublic Premium

Making AI more ethical and safer

As AI becomes more pervasive, greater responsibility is required of AI systems, according to the report. For example, emerging threats such as AI-driven cybercrime and autonomous weapons require immediate attention, as do the ethical implications of new AI techniques.

Respondents ranked the most pressing ethical challenges as:

  • Misinformation (75%)
  • Privacy (58.75%)
  • Responsibility (49.38%)

These findings indicate that more transparency, accountability, and explainability are needed in AI systems, and that ethical and safety concerns should be addressed through interdisciplinary collaboration, continuous oversight, and clearer assignment of responsibility.

Respondents also cited political and structural barriers, “with concerns that meaningful progress may be hindered by governance and ideological divides.”

Evaluating AI using various factors

The researchers make the case that AI systems introduce "unique evaluation challenges." Current evaluation approaches focus on benchmark testing, but more attention needs to be paid to usability, transparency, and adherence to ethical guidelines, they said.
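For context, benchmark testing typically means scoring a model's outputs against a fixed set of labeled examples. The sketch below is a minimal illustration of that pattern; the benchmark items and the query_model stand-in are hypothetical, not drawn from the report.

    # Minimal sketch of benchmark-style evaluation: score a model against fixed
    # question/answer pairs. The items and query_model() are hypothetical placeholders.
    BENCHMARK = [
        {"question": "What is the capital of France?", "answer": "Paris"},
        {"question": "How many bits are in a byte?", "answer": "8"},
    ]

    def query_model(question: str) -> str:
        """Stand-in for a real model call (e.g., an API request)."""
        return "Paris" if "France" in question else "8"

    correct = sum(
        query_model(item["question"]).strip().lower() == item["answer"].lower()
        for item in BENCHMARK
    )
    print(f"accuracy: {correct / len(BENCHMARK):.0%}")
    # A single accuracy number like this says nothing about usability, transparency,
    # or ethical adherence -- the gap the report's authors highlight.

A score computed this way measures only answer matching on a fixed test set, which is precisely why the report's authors argue it cannot capture qualities like usability or ethical adherence.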

Implementing AI agents introduces challenges

AI agents have evolved from autonomous problem-solvers to AI frameworks that enhance adaptability, scalability, and cooperation. Yet the researchers found that agentic AI, while enabling flexible decision-making, brings new challenges around efficiency and complexity.

The report’s authors state that integrating AI with generative models “requires balancing adaptability, transparency, and computational feasibility in multi-agent environments.”

More aspects of AI research

Some of the other AI research-related topics covered in the AAAI report include sustainability, artificial general intelligence, social good, hardware, and geopolitical aspects.


