Important Aspects of the EU’s AI Act
Opinion
- Generative AI like ChatGPT, albeit not considered High Risk, would still have to adhere to the copyright regulations. And since copyright is governed by the Berne Convention, where no formality is required (unlike WIPO’s Madrid, PCT & Hague systems), ChatGPT and its cousins, friends, and enemies have to abide by copyright law. This would be troubling for them, as they would have to publish the data used for training their AI models. If someone remembers, the NYT sued OpenAI over ChatGPT for exactly this reason. And as I also pointed out in my book and blogs, the way algorithms manipulate DATA to index pages in search might also come under the ambit of this regulation.
- One of the main features of this AI Act is that all providers of General Purpose AI Models now have to produce technical documentation, in accordance with the Copyright Directive, of what content was used to train the models. But can this be calculated from a retrospective point of view too, given that a model’s development is always incremental in nature, not discrete?
- AIs have been classified from minimal risk (e.g. spam filters), to high-risk AIs (biometrics, emotion recognition, worker management, law enforcement, etc.), up to unacceptable risk, i.e. the outright prohibited AIs (manipulative AI, social scoring, biometric categorisation, etc.). In the case of the intermediate, limited-risk level (chatbots, deepfakes), the provider must ensure that the deployer knows they’re interacting with an AI system. And High-Risk AIs would have to keep logs, including the time of usage, the input and its match against the reference database, the identification of the persons involved, etc. (see the log-entry sketch after this list). All in the name of contrived Free Speech! DUH!
- This Act isn’t applicable for defence, military, or national security purposes.
- One of its main aspects is that it isn’t applicable for R&D purposes, including whatever output is produced; which means that if AGI is ever developed, its output won’t be assessed under this Act. This is Wonderful! 😊
- AI system providers in any case need to provide documentation, IPR information, technical descriptions, design specifications, and risk management. Thumbs Up!
- The deployers (users) have fewer obligations than the providers (developers).
- Definitions of Deployer, Provider, AI System, Representative, Distributor, Operator, Authority, Notified Body, etc. have been provided; but somehow, I’m recalling the DABUS case right now. If I’m not wrong, and to the best of my knowledge and belief, no definition covers the case where an AI System becomes both the Deployer and the Provider; in that case, this Act might not work. Again, I may be wrong, and this too is my hypothesis; yet DABUS was also a reality that has already been discussed. These Acts have been written with only the Natural Person in mind, not a Synthetic Being! But then again, if and when an AGI gains consciousness, there’s no point in any Act or Rule; everything would depend on the evolutionary acts of that AI alone. Furthermore, on one hand they define the deployer as a natural person, while in the same definition adding that this Act won’t be applicable to those using AI systems for personal, non-professional activity; so what exactly is this personal, non-professional activity when any natural person is also involved as a deployer? 😊
- One more important feature is that the training data of an AI application needs to be disclosed in accordance with IPR; but as the training DATA used is always voluminous, would the provider only provide the names of the authors of any work and its related metadata (a reference to what they have used), or the entire dataset, which would be enormous? No idea (one possible metadata-only approach is sketched after this list).
- High-Risk AIs must retain their logs over the system’s entire lifetime.
- Does this also mean that algorithms need to be disclosed by the provider if they are being used to manipulate search results, or to decide who appears first on the page or in the feed?
- If the AI has been rolled out under an open-source licence or as a free version, it wouldn’t have to adhere to the obligations unless it falls under the High-Risk AI definition; yet IPR law would always prevail. Thus, I don’t understand: as most Internet companies were/are mostly built via data theft, how would this impact them? Would this be circumvented via Internet agreements like clickwrap and browsewrap, including where the deployers’ private data has been used by the companies? Today almost all companies use AI features of at least the moderate-risk level, so can it all be circumvented by mere such agreements? Or would INFORMED CONSENT be restricted only to the testing / beta-testing phase, or would these agreements be expanded to apply also once the product is finally deployed? That needs to be addressed. Just a dilemma! Because otherwise, what’s the use of defining the term Testing Phase?
- Are these tri-/quad-party agreements (Union-Provider-Third Party, etc.) restricted only to how the AI functions, or, if not, how would these Articles of the Act apply to the Deployer (w.r.t. a Natural Person)? A dilemma, maybe a juvenile one!
- One obligation for High-Risk AIs is that they cannot influence elections; so does this also mean the Critics Laundering Program may END? 😊
- High Risk also includes border control and migration; thus, whether one can or can’t use it to gauge the influx would be on the onus of the member state alone!
- If I’m not wrong, if this Act is approved and adopted by other nations as well, then I can assure everyone that we are going back to the era of the late-90s Internet, because only applications like email, travel bookings, chat applications, blogs, web portals, and social media platforms where content is not shown unless the person searches for it would survive! Which is what I desired! 😊
- ACTS/REGULATIONS OR NOT, how would the EU come to know whether an AI has achieved AGI or not, and that it isn’t manipulating the EU itself? 😊 Now, this is the EU’s Dilemma!
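To make the logging obligation above concrete, here is a minimal sketch, in Python, of what one automatically recorded usage event for a High-Risk AI might look like. Every name and field here is my own illustration of the items mentioned above (time of usage, input match, persons involved); the Act mandates logging but prescribes no format, schema, or field names.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class HighRiskAILogEntry:
    """One recorded usage event for a High-Risk AI system.

    All field names are illustrative; the Act requires logging but
    does not prescribe any concrete schema.
    """
    system_id: str            # which AI system produced the event
    used_at: str              # time of usage (UTC timestamp)
    input_reference: str      # reference to the input data checked
    reference_db_match: bool  # did the input match the reference database?
    persons_involved: list = field(default_factory=list)  # natural persons identified/involved

def record_use(system_id, input_reference, matched, persons):
    """Serialise one usage event as an append-only JSON line."""
    entry = HighRiskAILogEntry(
        system_id,
        datetime.now(timezone.utc).isoformat(),
        input_reference,
        matched,
        persons,
    )
    return json.dumps(asdict(entry))

# Hypothetical example: one biometric lookup, logged for a later audit.
print(record_use("biometric-gate-v2", "frame-000184", True, ["officer-17"]))
```

Appending such lines to write-once storage for the system’s whole lifetime would cover the “entire duration” point made earlier, though that design choice is mine, not the regulator’s.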
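And for the training-data question: rather than shipping terabytes of raw corpus, a provider could publish per-work metadata only (author, title, source, licence). Below is a sketch of that idea under my own assumptions; the column names and the “added_in_version” field (which would let an incrementally trained model document each training run separately, touching the retrospective point raised above) are hypothetical, not anything the Act or the Copyright Directive specifies.

```python
import csv
import io

# Hypothetical per-work metadata a provider might publish instead of the
# raw training corpus. Columns are my assumption, not a mandated format.
TRAINING_DATA_MANIFEST = [
    {"author": "J. Doe", "title": "Example Essay",
     "source": "example.com/essay", "licence": "CC-BY-4.0",
     "added_in_version": "v1"},
    {"author": "A. Roe", "title": "Sample Novel",
     "source": "publisher-archive", "licence": "all-rights-reserved",
     "added_in_version": "v2"},
]

def manifest_as_csv(rows):
    """Render the manifest as CSV: megabytes of metadata instead of
    terabytes of text, which is the whole point of a summary approach."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(manifest_as_csv(TRAINING_DATA_MANIFEST))
```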
© Pranav Chaturvedi