According to a 2024 report from the Centre for the Governance of AI at the University of Oxford, 82% of Americans and Europeans believe that AI hallucinations should be carefully managed. Concerns cited ranged from how AI is used by startups in surveillance and in spreading fake content online to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don't require a human controller.
What happens when injustices are propagated not by individuals or startups but by a collection of machines? Lately, there has been increased attention on the downsides AI may produce, from inequitable access to opportunities to the escalation of polarisation in our communities.
Not surprisingly, there has been a corresponding rise in discussion around how to manage AI hallucinations and how startups can approach AI ethics strategically.
AI has already shown itself capable of bias, which can lead to unfair decisions based on attributes that are protected by law. Bias can enter through the data inputs, which may be poorly selected, outdated, or skewed in ways that embody our historical societal prejudices.
Most deployed AI systems do not yet embed methods to put data sets to a fairness test or otherwise compensate for problems in the raw material.
There can also be bias in the algorithms themselves and in what features they deem important (or not). For example, AI startups may vary their product prices based on information about shopping behaviours. If this information ends up being directly correlated with gender or race, then the AI is making decisions that could result in a public-relations nightmare for the startup, not to mention legal trouble.
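As a concrete illustration, here is a minimal sketch of the kind of pre-deployment check described above: testing whether prices driven by a behavioural feature end up diverging across a protected group. All records, column names, and the warning threshold are hypothetical, invented purely for this example.

```python
from statistics import mean

# Hypothetical pricing records: a shopping-behaviour feature the model uses,
# the price it produced, and a protected attribute it never sees directly.
records = [
    {"basket_visits": 12, "price": 9.0,  "group": "A"},
    {"basket_visits": 10, "price": 9.5,  "group": "A"},
    {"basket_visits": 11, "price": 10.0, "group": "A"},
    {"basket_visits": 3,  "price": 12.0, "group": "B"},
    {"basket_visits": 4,  "price": 12.5, "group": "B"},
    {"basket_visits": 2,  "price": 13.0, "group": "B"},
]

def mean_price(group):
    """Average price charged to members of one group."""
    return mean(r["price"] for r in records if r["group"] == group)

# A large gap suggests the behavioural feature is acting as a proxy
# for the protected attribute, even though the attribute itself is
# never an input to the model.
gap = mean_price("A") - mean_price("B")
if abs(gap) > 0.5:  # threshold chosen arbitrarily for illustration
    print(f"warning: mean price gap between groups is {gap:+.2f}")
```

This is only a sketch; a real audit would use proper statistical tests and a library built for the purpose, but even a check this simple surfaces the proxy effect in the toy data.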
As these AI systems scale in use, they amplify any unfairness within them. The decisions these systems output, and which people then comply with, can eventually propagate to the point that biases become accepted as ground truth.
Strategic AI Ethics Management For Startups

Of course, startups cannot simply wait for clarity on the regulations they will operate under. Microsoft president Brad Smith has written about the need for public regulation and corporate responsibility around facial recognition technology.
Google established an AI ethics advisory council. Earlier this year, Amazon began a collaboration with the National Science Foundation to fund research on fairness in AI. While we have yet to reach firm conclusions around tech regulation, the last three years have seen a sharp increase in forums and channels for discussing governance.
The Institute of Electrical and Electronics Engineers (IEEE), an engineering, computing, and technology professional organisation that establishes standards for maximising the reliability of products, put together a crowdsourced global treatise on the ethics of autonomous and intelligent systems.
There is a need for startups to develop a global perspective on AI ethics. Different cultures around the world have very different perspectives on privacy and ethics. Within Europe, for example, UK citizens are willing to tolerate video camera monitoring in central London, perhaps because of IRA bombings of the past, while Germans are much more privacy oriented, influenced by the former intrusions of East German Stasi spies. In China, the public is tolerant of AI-driven applications like facial recognition and social credit scores, at least in part because social order is a key tenet of Confucian moral philosophy.
Accordingly, startups are working to create standardised AI ethics and compliance frameworks. Microsoft’s AI ethics research project involves ethnographic analysis of different cultures, gathered through close observation of behaviours, and advice from external academics such as Erin Meyer of INSEAD.
Eventually, we may see industry consortia publish collections of policies on how to use AI and related technologies. Some have already emerged, covering everything from avoiding algorithmic bias to model transparency to specific applications like predictive policing.
Mitigating AI Hallucinations For Startups

As startups pour resources into designing the next generation of tools and products powered by AI, people are not inclined to assume that these startups will automatically step up to their ethical and legal responsibilities if these systems go awry.
The time when startups could simply ask the world to trust artificial intelligence and AI-powered products is long gone. Trust in AI requires fairness, transparency, and accountability. But even AI researchers cannot agree on a single definition of fairness: there is always a question of who is in the affected groups and what metrics should be used to evaluate, for instance, the impact of bias within the algorithms.
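The disagreement over fairness metrics is not merely philosophical: two widely used definitions can give opposite verdicts on the same predictions. The sketch below, using entirely made-up labels and predictions for two groups, compares demographic parity (equal rates of positive predictions) with equal opportunity (equal true positive rates):

```python
# Hypothetical (group, true_label, predicted_label) triples;
# every number here is illustrative, not real data.
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 1),
]

def positive_rate(group):
    """Share of the group receiving a positive prediction --
    the quantity that demographic parity compares."""
    preds = [p for g, _, p in data if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Share of truly positive members predicted positive --
    the quantity that equal opportunity compares."""
    hits = [p for g, t, p in data if g == group and t == 1]
    return sum(hits) / len(hits)

# Both groups get positive predictions at the same rate (0.5 vs 0.5),
# so demographic parity holds -- yet qualified members of group B are
# recognised far less often (TPR 1.0 vs ~0.33), violating equal
# opportunity. Which metric "counts" depends on who you ask.
```

The point of the toy example is exactly the one made above: before a startup can claim its system is fair, it has to choose which affected groups and which metric define fairness, and the choices can conflict.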
Since startups have not figured out how to stem the tide of “bad” AI, their next best step is to be a contributor to the conversation. Denying that bad AI exists or fleeing from the discussion isn’t going to make the problem go away.
Identifying startup founders who are willing to join the dialogue and finding entrepreneurs willing to help establish standards are actions startups should be taking today. Some are also appointing a chief AI ethics officer to evangelise, educate, and ensure that their teams are aware of AI ethics and bought into it.
When done correctly, AI startups can do immeasurable good. Their solutions can provide educational interventions to maximise learning in underserved communities, improve health care through access to our personal data, and help people do their jobs better and more efficiently.
Now is not the time to hinder progress. Instead, it is the time for startups to make a concerted effort to ensure that the design and deployment of AI are fair, transparent, accountable to all stakeholders, and free of AI hallucinations, and to be a part of shaping the coming standards and regulations in AI ethics that will make AI work for all.