Apple delays European launch of new AI features, blames EU tech rules (2024)
Apple's ambitious push into artificial intelligence has hit a snag in Europe. The tech giant, which unveiled a suite of new AI features and software updates for its iPhones, iPads, and Macs earlier this month, has revealed that three of these features will not be available to EU users this year.
Apple Intelligence: Uses AI to generate text, images, and other content from user prompts. The feature was slated to be available on the iPhone 15 Pro, iPhone 15 Pro Max, and iPads and Macs equipped with Apple's M1 chip or newer.
EU's Digital Markets Act Raises Regulatory Concerns
Apple attributes the delay to regulatory uncertainties arising from the European Union's Digital Markets Act (DMA). The DMA, which aims to promote competition and prevent large tech platforms from unfairly dominating the market, introduces strict interoperability requirements.
"Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security," Apple said in a statement to Reuters.
The company worries that adhering to the DMA's interoperability mandates could create vulnerabilities that expose user data to exploitation.
Despite the delay, Apple emphasized to Reuters its commitment to finding a way to bring these AI features to its European customers without compromising their safety: "We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety."
Apple's advanced AI-powered features will not be available to users in the EU at the same time as the rest of the world, owing to the provisions of the Digital Markets Act.
Digital Markets Act
The DMA aims to ensure a higher degree of competition in European digital markets by preventing large companies from abusing their market power and by allowing new players to enter the market. The regulation targets the largest digital platforms operating in the European Union.
The AI Act takes a clear, easy-to-understand approach based on four different levels of risk: minimal risk, high risk, unacceptable risk, and specific transparency risk. It also introduces dedicated rules for general-purpose AI models.
The Artificial Intelligence Act (AI Act) is a European Union regulation concerning artificial intelligence (AI). It establishes a common regulatory and legal framework for AI within the European Union (EU). It will come into force on 1 August 2024.
The new law aims to foster the development and uptake of safe and trustworthy AI systems across the EU's single market by both private and public actors. At the same time, it aims to ensure respect of fundamental rights of EU citizens and stimulate investment and innovation on artificial intelligence in Europe.
The Act also bans AI systems that scrape facial images from the internet or CCTV footage, infer emotions in the workplace or educational institutions, and categorize people based on their biometric data.
Enforcement provisions: The AI Act's penalty provisions for non-compliance will apply from the same date, by which point EU member states must have implemented, and notified the European Commission of, their respective rules on penalties and other enforcement measures.
Europe is investing heavily in supercomputers and AI research to catch up with the US and create domestic champions, but its AI challengers are starting from far behind: the continent trails both the US and China in the availability of capital and computing power.
This article outlines how to classify high-risk AI systems. An AI system is considered high-risk if it is used as a safety component of a product, or is itself a product, covered by EU legislation. Such systems must undergo a third-party conformity assessment before they can be sold or used.
The EU AI Act also enumerates certain exceptions to its material scope: for example, it does not apply to open-source AI systems (unless they are prohibited or classified as high-risk) or to AI systems used solely for scientific research and development (Arts.
Unacceptable-risk AI systems are considered a threat to people and will be banned. They include cognitive behavioural manipulation of people or specific vulnerable groups, for example voice-activated toys that encourage dangerous behaviour in children.
As noted above, there is currently no comprehensive legislation in the US that directly regulates AI. However, the White House Executive Order on AI and proposed legislation at the federal and state level generally seeks to address the following issues: Safety and security. Responsible innovation and development.
Non-compliance with prohibited AI practices can result in fines of up to EUR 35 million or 7% of a company's annual worldwide turnover, whichever is higher. Other violations can result in fines of up to EUR 15 million or 3% of annual turnover.
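The two penalty tiers can be sketched as a small calculation. This is an illustrative reading only: the function name, the "whichever is higher" interpretation of the cap, and the turnover figures are assumptions for the example, not text from the regulation.

```python
def max_fine_eur(annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the fine under the two tiers described above.

    Assumes the cap is whichever of the flat amount or the turnover
    percentage is HIGHER; hypothetical helper, not from the Act's text.
    """
    if prohibited_practice:
        flat, pct = 35_000_000, 0.07  # up to EUR 35M or 7% of turnover
    else:
        flat, pct = 15_000_000, 0.03  # up to EUR 15M or 3% of turnover
    return max(flat, pct * annual_turnover_eur)

# A company with EUR 1bn turnover engaging in a prohibited practice:
print(max_fine_eur(1_000_000_000, True))  # 70000000.0 (7% exceeds EUR 35M)
```

For smaller companies the flat amount dominates; the percentage cap only bites once turnover exceeds EUR 500 million under these assumed tiers.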
The AI Office is to draw up codes of practice covering, but not necessarily limited to, obligations for providers of general purpose AI models. Codes of practice should be ready by 2 May 2025 and should provide at least a three-month period before taking effect.
To promote Legally Trustworthy AI, the Proposal must ensure (1) an appropriate allocation and distribution of responsibility for the wrongs and harms of AI; (2) a legitimate and effective enforcement architecture, including adequate mechanisms for transparency to secure the effective protection of fundamental rights ...
Governments across the world have only just begun to draft and pass laws tailored to AI technology. Heading into 2024, we expect both sector-specific and broader, omnibus AI regulations to impact nearly all industries as the use of AI expands.
In its common position, the Council narrowed the definition of an AI system to systems developed through machine-learning approaches and logic- and knowledge-based approaches, seeking to draw a clearer line between AI and simpler software systems.
The EU AI Act classifies AI systems into four different risk levels: unacceptable, high, limited, and minimal risk. Each class has different regulation and requirements for organizations developing or using AI systems.
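The four tiers could be modeled as a simple enumeration. The labels below and the example obligations attached to each tier are illustrative assumptions for the sketch, not wording taken from the regulation.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers named above; obligation summaries are illustrative."""
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment and ongoing obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# A CV-screening tool is commonly cited as an example of high risk:
print(RiskLevel.HIGH.value)  # conformity assessment and ongoing obligations
```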
Article 3(1) of the EU AI Act defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, ...