Surge in Data Privacy Concerns Linked to Generative AI, Deloitte Report Reveals
The latest report from Deloitte has highlighted a significant increase in concerns about data privacy related to generative AI. The percentage of professionals who ranked it among their top three worries has jumped from 22% last year to 72% this year.
Transparency and data provenance were the next most-cited ethical GenAI issues, with 47% and 40% of professionals, respectively, ranking them in their top three this year. However, only a small fraction (16%) expressed concern over job displacement.
An increasing number of employees want to understand how AI technology works, especially when it comes to handling sensitive data. A study conducted by HackerOne last September revealed that almost half of security professionals consider AI risky, viewing leaked training data as a potential threat.
In line with these findings, “safe and secure” was one of the top three ethical technology principles for 78% of business leaders, a 37% increase since 2023. This further emphasizes how security is becoming an increasingly important issue.
Data Source: Deloitte’s State Of Ethics And Trust In Technology Report
The survey results are derived from Deloitte’s “State of Ethics and Trust in Technology” report for 2024. The report surveyed over 1,800 business and technical professionals worldwide about the ethical principles they apply to technologies, specifically GenAI.
High-profile AI Security Incidents Garnering More Attention
A little over half of the respondents from this year’s and last year’s reports stated that cognitive technologies like AI and GenAI pose greater ethical risks compared to other emerging technologies such as virtual reality, quantum computing, autonomous vehicles, and robotics.
This shift in focus could be attributed to a broader awareness of the importance of data security following widely publicized incidents. For instance, a bug in OpenAI’s ChatGPT exposed the personal data of around 1.2% of ChatGPT Plus subscribers, including names, emails, and partial payment details.
The trustworthiness of the chatbot was further undermined by news that hackers had infiltrated an online forum used by OpenAI employees and stolen confidential information about the company’s AI systems.
Suggested Reading: Artificial Intelligence Ethics Policy
“The widespread availability and adoption of GenAI may have increased respondents’ familiarity and confidence in the technology, driving up optimism about its potential for good,” said Beena Ammanath, Global Deloitte AI Institute and Trustworthy AI leader. However, she also noted that “the continued cautionary sentiments around its apparent risks underscore the need for specific, evolved ethical frameworks that enable positive impact.”
Impact of AI Legislation on Global Organisations
Naturally, more personnel are using GenAI at work than last year. The percentage of professionals reporting internal use has risen by 20%, according to Deloitte’s year-over-year reports.
A staggering 94% stated their companies have integrated it into processes in some way. However, most indicated it is still in the pilot phase or that usage is limited, with only 12% saying it is widely used. This aligns with recent Gartner research, which found that most GenAI projects don’t make it past the proof-of-concept stage.
Suggested Reading: IBM: While Enterprise Adoption Of Artificial Intelligence Increases Barriers Are Limiting Its Usage
Regardless of how pervasive its use might currently be, decision makers want to ensure their application of AI doesn’t land them in trouble, especially when legislation comes into play. Compliance was cited as the top reason for having ethical tech policies and guidelines by 34% of respondents, while regulatory penalties were among the top three concerns reported if such standards are not adhered to.
The E.U. AI Act came into effect on Aug. 1 and imposes strict requirements on high-risk AI systems to ensure safety, transparency, and ethical usage. Non-compliance could result in fines ranging from €35 million ($38 million USD) or 7% of global turnover down to €7.5 million ($8.1 million USD) or 1.5% of turnover.
Over a hundred companies, including Amazon, Google, Microsoft, and OpenAI, have already signed the E.U. AI Pact and volunteered to start implementing the Act’s requirements ahead of legal deadlines. Doing so demonstrates their commitment to responsible AI deployment while also helping them avoid future legal challenges.
Suggested Reading: G7 Countries Establish Voluntary AI Code Of Conduct
In October 2023, the U.S. unveiled an AI Executive Order, providing comprehensive guidance on maintaining safety, civil rights, and privacy within government agencies while promoting innovation and competition throughout the country. Although it isn’t a law, many U.S.-operating companies may make policy changes in response, ensuring compliance with evolving federal oversight and public expectations for the safe use of AI technology.
The Global Impact Of The E.U. AI Act And The U.S. AI Executive Order
The E.U. AI Act has had a significant influence in Europe, with 34% of European respondents stating their organisations made changes to their use of AI over the past year in response. However, its impact is more widespread: 26% of South Asian respondents and 16% of North and South American respondents also made changes due to the Act’s implementation.
Furthermore, following the executive order, 20% of U.S.-based respondents reported making changes at their organisations. A quarter (25%) of South Asian respondents, followed by 21% in South America and 12% in Europe, said they did likewise.
“Cognitive technologies such as AI are recognized for having both high potential benefits for society but also high risks for misuse,” wrote the authors of Deloitte’s report.
“The accelerated adoption of GenAI may be outstripping organizations’ capacity to govern the technology. Companies should prioritize both implementing ethical standards for GenAI and meaningful selection of the use cases to which GenAI tools are applied.”