Monday, 26 May 2025

Meta Wins Legal Battle: Can Train AI with EU User Data

Introduction

On 23 May 2025, the Higher Regional Court of Cologne in Germany made an important decision. The court dismissed an attempt to stop Meta Platforms Inc. (the company behind Facebook and Instagram) from using European users’ data to train artificial intelligence (AI). This case is very important for the future of AI and data privacy laws in Europe.

Meta had earlier said it would use public posts from European Union (EU) users to help train its AI models. This announcement raised a lot of concern among people and organizations. Some felt their data was being used without clear consent, while others supported Meta's plan, saying it was legal under current rules.

This article explains the background of the case, the legal arguments from both sides, the court’s decision, and what this means for companies and users across the EU.


What Did Meta Announce?

In mid-2024, Meta informed its users in the EU that it planned to use public posts—like comments, photos, and videos on Facebook or Instagram—for training AI systems. This includes generative AI, which is used to build tools like chatbots, translation software, or content creation systems.

Meta also said that users had a chance to opt out if they didn’t want their data to be used. The deadline for opting out was set for 27 May 2025. If users did nothing, Meta would treat that as permission to use their data.

But some groups, like the Consumer Protection Organization of North Rhine-Westphalia, argued that this process was unfair. They said that users should first be asked for permission (opt-in), rather than being automatically included unless they say no.




The Legal Background: GDPR and DMA

In Europe, there are strict laws about how companies can use people’s personal data.

General Data Protection Regulation (GDPR)

The GDPR is the main privacy law in the EU. It says that companies must have a clear legal reason to use personal data. Meta said it was using a concept called “legitimate interest”, which allows data to be used if:

  • It’s necessary for the company’s goals, and

  • It doesn’t seriously harm the user’s rights.

Meta also said it had given people a clear way to opt out, which reduces harm to users.

But consumer groups disagreed. They said Meta should be using "explicit consent" (also called opt-in). This means users must say “yes” before their data is used. They also said that some types of data—like health information or religion—are more sensitive and need stronger protection.

Digital Markets Act (DMA)

Another law called the Digital Markets Act (DMA) is also important. The DMA applies to big tech companies like Meta, which are designated as "gatekeepers". These companies must follow special rules to avoid using their power unfairly.

One big question was whether Meta’s AI training was combining user data from different platforms, like Facebook, Instagram, and WhatsApp, which could be against the DMA.


Mixed Opinions from Data Protection Authorities

Irish Data Protection Commission (DPC)

Because Meta’s EU headquarters is in Ireland, the Irish Data Protection Commission (DPC) is its main data regulator in Europe. After more than a year of investigation, the Irish DPC approved Meta’s AI training plan. It said that Meta had made some good changes:

  • More transparent notices

  • Easier opt-out forms

  • Clear explanations about how the data would be used

The DPC said it would review the situation again in October 2025, to make sure everything stays within the law.

Hamburg Data Protection Commissioner (HmbBfDI)

Not all regulators agreed. The Hamburg Data Protection Commissioner (HmbBfDI) in Germany had a very different opinion.

Just before Meta’s AI training was set to begin, the HmbBfDI started urgent legal action against Meta. They asked for AI training in Germany to be put on hold for at least three months. Meta had to reply to this request by 26 May 2025.

The Hamburg authority raised several serious concerns:

  • Why was Meta using such large amounts of data?

  • Even if names were removed (called “de-identification”), could users still be harmed?

  • Are public posts truly public if they are only visible after logging in?

  • What about old data, which people shared years ago—did they know it could be used for AI?

  • What about people shown in pictures who don’t even have Facebook accounts?

These questions show that data privacy laws are still evolving, especially when it comes to new technologies like AI.


Court Case: Consumer Group vs. Meta

Because of the controversy, the Consumer Protection Organization of North Rhine-Westphalia filed a case at the Higher Regional Court of Cologne. They wanted to stop Meta from using user data for AI, at least temporarily.

The consumer group said:

  • Meta’s legal basis (legitimate interest) was not good enough.

  • Meta should ask for consent (opt-in) from users.

  • Meta was violating the DMA by mixing data from different platforms.

But the court rejected the case. It ruled that:

  • Meta’s interest in training AI was stronger than the harm to users.

  • Meta had reduced the risks by giving users clear options to opt out.

  • There was no illegal combination of personal data, as Meta said it did not merge individual data from different platforms.

This decision was a big win for Meta.


What the Court Decision Means

The court’s decision doesn’t mean that anything goes for AI training in Europe. But it does show that AI training is not automatically illegal, even if it uses personal data.

The case gives us several key lessons:

  1. AI can be trained using user data, but companies must follow strict rules.

  2. Transparency is essential. Users must know what’s happening.

  3. Opt-out systems may be legal in some cases, but this is still debated.

  4. Different EU authorities may have different opinions, causing legal uncertainty.

  5. Legal reviews and court cases will continue, especially as new AI tools emerge.


Challenges with Data and AI

AI systems need lots of data to become smart and useful. But much of this data is about people—what they say, do, or share online.

This creates a conflict:

  • AI developers want more data to improve their tools.

  • Privacy advocates want better control for users.

Even if names or faces are removed, patterns in the data can still identify people. For example, a unique combination of location, time, and interest might reveal who someone is.

Also, older posts may have been shared under different terms. People didn’t know AI training was a possibility back then.

These questions are difficult to answer, and they show how quickly technology moves ahead of the law.


What Should Companies Do Now?

For companies like Meta—and any business using AI trained with user data—this case sends a clear message:

  1. Follow GDPR and DMA closely.

  2. Use transparency notices that are simple and easy to understand.

  3. Offer easy opt-out options.

  4. Work with data protection authorities early.

  5. Separate sensitive data like health or religion from general data.

  6. Avoid combining data from different services unless users are informed.

Companies must build trust with users and prove they’re acting responsibly. AI is powerful, but it must be used fairly.


What About the Users?

If you are a Facebook or Instagram user in the EU, here’s what you should know:

  • Your public posts may be used to train AI.

  • Meta sent emails to inform you about this.

  • You have the right to object and stop your data from being used.

  • The deadline to opt out was 27 May 2025.

  • You can still ask Meta about your data and file complaints with your country’s data protection authority.

Being informed helps you make better choices.


Conclusion

The case between Meta and the Consumer Protection Organization in Germany is a landmark in the discussion about AI and personal data. It shows that:

  • Laws like GDPR and DMA are being tested in real-time.

  • AI is here to stay, and the way it’s trained matters a lot.

  • Courts and regulators are still figuring out how to balance innovation with privacy.

While the court ruled in Meta’s favor, the debate is far from over. More decisions will follow as AI grows in importance. For now, this case sets a precedent: companies can train AI with existing data if they follow proper rules and offer users real choices.

As AI becomes a part of daily life, it’s up to companies, governments, and users to protect rights and encourage innovation at the same time.

Monday, 17 March 2025

The Department of Telecommunications (DoT) has announced the launch of the 5G Innovation Hackathon 2025

 The Department of Telecommunications (DoT) has announced the launch of the 5G Innovation Hackathon 2025, a comprehensive six-month program aimed at developing cutting-edge 5G-powered solutions to address a range of societal and industrial challenges. The initiative is open to undergraduate and postgraduate students, startups, and professionals, providing them with a unique opportunity to innovate using 5G technology.



Key Features of the Program:

  • Mentorship and Funding: Participants will receive guidance from experts, seed funding, and access to over 100 5G Use Case Labs, facilitating the development of their ideas into viable prototypes.
  • IPR Assistance: Participants will also benefit from support in filing Intellectual Property Rights (IPR) to help commercialize their innovations.
  • Focus Areas: Proposals are encouraged in areas like AI-driven network maintenance, IoT solutions, 5G broadcasting, smart health, agriculture, industrial automation, V2X, NTN, D2M, and quantum communication, among others. Participants will be urged to leverage features such as network slicing and Quality of Service (QoS).

Program Stages & Timeline:

  1. Proposal Submission: Proposals are to be submitted between March 15 and April 15, 2025. Institutions can nominate up to five proposals for evaluation by the DoT.
  2. Regional Shortlisting: 150–200 selected teams will receive further guidance to enhance their solutions. The top 25–50 teams will be shortlisted for the Pragati Phase.
  3. Pragati Phase (June 15 – September 15, 2025): Teams will receive ₹1,00,000 in seed funding to develop prototypes, access to 5G Use Case Labs, mentorship, and testing infrastructure.
  4. Evaluation and Showcase (September 2025): Teams will present their prototypes to a Technical Expert Evaluation Committee (TEEC), with evaluation based on technical execution, scalability, impact, and novelty.
  5. Winners Announcement (October 2025): The top teams will be showcased at the India Mobile Congress (IMC) 2025.

Awards and Recognition:

  • 1st Place: ₹5,00,000
  • Runner-Up: ₹3,00,000
  • 2nd Runner-Up: ₹1,50,000
  • Special Mentions: Best Idea and Most Innovative Prototype, each receiving ₹50,000
  • Certificates of Appreciation: Awarded to 10 labs for the Best 5G Use Case and one for the Best Idea from Emerging Institutes.

The hackathon, with a budget of ₹1.5 crore, aims to develop over 50 scalable 5G prototypes, generate more than 25 patents, and foster collaboration across academia, industry, and government. This program aligns with India’s vision to establish itself as a global leader in 5G innovation and applications.

Important Dates:

  • Proposal Submission: March 15 – April 15, 2025
  • Final Winners Announcement: October 1, 2025

This initiative is a significant step toward harnessing 5G technology's potential and nurturing a new generation of innovations that could drive progress in multiple sectors.

Friday, 17 January 2025

AI and machine learning (ML) have become cornerstones of fintech

 AI and machine learning (ML) have become cornerstones of fintech, driving innovations across various domains in 2025. Here are the key areas where AI is revolutionizing fintech operations and decision-making:

  1. Enhanced Risk Management: AI and ML models analyze massive amounts of data, detecting patterns that would be impossible for humans to identify. This allows fintech companies to predict and mitigate risks in real time, reducing exposure to fraud and credit default. AI-driven credit scoring systems have become more accurate, allowing financial institutions to assess risks more holistically.

  2. Automated Decision-Making: AI streamlines decision-making processes by automating routine tasks such as loan approvals, customer verification, and transaction monitoring. This automation enables faster processing times, reducing customer friction and freeing up human resources for more complex tasks.

  3. Personalized Financial Products: AI's ability to analyze user behavior and preferences allows fintech companies to offer highly personalized financial products and services. Machine learning algorithms create tailored investment portfolios, personalized loan products, and customized insurance plans based on the unique needs of individuals and businesses.

  4. Fraud Detection and Prevention: With the rise of digital transactions, fraud has become a significant concern in fintech. AI systems are revolutionizing fraud detection by monitoring vast datasets, identifying unusual patterns, and flagging potentially fraudulent activities in real time. These systems continuously learn and adapt to new threats, making fraud prevention more effective over time.

  5. Customer Service and Engagement: AI-driven chatbots and virtual assistants are reshaping customer service in fintech. These systems handle queries 24/7, provide personalized advice, and help customers manage their finances more effectively. The increased use of natural language processing (NLP) ensures that interactions feel more human-like and responsive.

  6. Algorithmic Trading: AI and ML have taken algorithmic trading to new heights. By processing vast amounts of market data, these algorithms make faster and more informed trading decisions. AI helps predict market trends and optimize trading strategies, giving fintech firms a competitive edge in the stock and cryptocurrency markets.


  7. Regulatory Compliance: Regulatory technologies (RegTech) powered by AI help fintech companies stay compliant with ever-evolving regulations. AI systems can automatically track changes in financial laws, identify areas of non-compliance, and ensure that companies adhere to legal standards, thereby reducing the risk of penalties and enhancing trust with regulators.

  8. Blockchain and Smart Contracts: AI is playing a significant role in enhancing the security and efficiency of blockchain technology. In fintech, AI-driven smart contracts automatically execute transactions when predefined conditions are met, eliminating the need for intermediaries and ensuring transparency and security in financial agreements.
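The pattern-flagging idea in point 4 can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not any particular vendor's system): it flags transactions whose amounts deviate sharply from the norm using a simple z-score rule, a far simpler stand-in for the adaptive models described above.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of transactions whose amount deviates from the
    mean by more than `threshold` standard deviations (z-score rule)."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mean) / stdev > threshold]

# Mostly routine card payments, plus one outsized transfer at index 6.
txns = [12.5, 9.9, 14.2, 11.0, 13.7, 10.4, 5000.0, 12.1, 9.5, 11.8]
print(flag_anomalies(txns, threshold=2.0))
```

Production systems replace the z-score with models that learn normal behavior per customer and adapt as spending patterns change, but the core idea is the same: score each transaction against a learned notion of "usual" and flag the outliers.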

In 2025, fintech operations in India and globally are no longer just about processing data but about deriving actionable insights that inform better business decisions. AI's continuous learning capability ensures that fintech firms can stay agile, innovative, and customer-focused in an increasingly competitive market.

Tuesday, 14 January 2025

Automated technology to handle 43% of work by 2030: Report




According to the World Economic Forum's "Future of Jobs Report 2025", the UAE is expected to experience significant job market disruptions, ranking 11th globally in terms of anticipated changes. The report predicts that by 2030, 43% of work tasks in the UAE will be handled by autonomous technologies. This shift is a part of a broader trend where businesses are increasingly integrating automation and AI to enhance efficiency.

In response to these anticipated disruptions, 28% of UAE employers plan to upskill their workforce to adapt to these technological changes. Upskilling will likely focus on equipping workers with the necessary skills to work alongside AI and automation technologies, as well as to take on roles that require human creativity, judgment, and strategic thinking.

This report highlights the accelerating pace of automation and the need for businesses and governments to prepare the workforce for these changes, ensuring that workers can transition to new roles and remain relevant in an evolving job market.

What follows is a closer look at the World Economic Forum's "Future of Jobs Report 2025" as it pertains to the UAE.

Here are some of the key takeaways:

  • High Level of Automation: The UAE is poised for significant automation, with 43% of work tasks projected to be handled by autonomous technologies. This signifies a rapid shift in how work is performed.
  • Focus on Upskilling: Recognizing the need for a skilled workforce in this changing landscape, a significant portion of employers (28%) are prioritizing upskilling initiatives. This proactive approach is crucial to ensure that the workforce remains competitive and adaptable.
  • Importance of Human Skills: The report implicitly emphasizes the importance of human skills that cannot be easily replicated by machines, such as critical thinking, creativity, and emotional intelligence. These skills will be highly valued in the future of work.   
  • Need for Workforce Adaptation: The report serves as a strong reminder of the urgent need for individuals and governments to prepare for the future of work. This includes investing in education and training programs that equip individuals with the skills necessary to thrive in an increasingly automated world.
Overall, the report provides valuable insights into the evolving nature of work in the UAE and highlights the importance of proactive measures to ensure a smooth and successful transition to an increasingly automated future.

Mercedes-Benz’s Virtual Assistant uses Google’s conversational AI agent



Mercedes-Benz’s virtual assistant, MBUX (Mercedes-Benz User Experience), has integrated Google's conversational AI technology to enhance its capabilities. This collaboration allows MBUX to provide more advanced natural language processing and understanding, making the in-car experience more intuitive for users.

With the integration of Google's AI, Mercedes-Benz aims to offer more natural and responsive voice commands, improving functions like navigation, media control, and personalized assistance. This enhancement enables the virtual assistant to better understand and predict user needs, creating a seamless and user-friendly experience.

 Mercedes-Benz's latest MBUX Virtual Assistant, introduced in the new Mercedes CLA at CES 2024, incorporates Google Cloud’s Automotive AI Agent platform. This platform is designed to enhance the driving experience by supporting continuous, multi-turn conversations and referencing information throughout the journey.

Unlike the older version of MBUX, which could process around 20 voice commands (like “Hey Mercedes”) and relied on OpenAI’s ChatGPT and Microsoft Bing for search results, the new system is far more advanced. It’s built on Google Cloud's Vertex AI development platform and powered by Google's Gemini language model. The upgraded MBUX Virtual Assistant is capable of handling complex conversational queries, providing nearly real-time Google Maps updates, restaurant reviews, recommendations, and more. Its ability to process multi-turn dialogues means it can maintain context over multiple interactions, making it much more dynamic and intuitive.

The assistant's new design includes four distinct personality traits: natural, predictive, personal, and empathetic, enhancing its ability to offer more tailored, human-like responses. It also improves upon clarity by asking follow-up questions when needed to ensure accuracy in its responses.

Google CEO Sundar Pichai emphasized the transformational potential of these AI-driven "agentic" capabilities in the automotive industry, suggesting this is just the beginning of a more personalized, intelligent in-car experience. While the new system is being launched with the next-generation MB.OS operating system in the CLA, Mercedes plans to roll out this advanced assistant to additional models in the future. However, specific models haven't been named yet.

What are Google's big plans for AI?



Google is making significant strides in artificial intelligence (AI) for 2025, focusing on the development and integration of its Gemini AI model across various platforms and services. CEO Sundar Pichai has outlined ambitious plans to introduce new AI products and features in the coming months, aiming to reach 500 million users with the Gemini AI model and app.


Key Developments:

  • Gemini AI Integration: Google plans to integrate the Gemini AI model into multiple products, enhancing user experiences across its ecosystem. This includes updates to Google TV, enabling users to search for content and ask questions without the need to say "Hey Google."

  • Automotive AI Collaboration: In collaboration with Mercedes-Benz, Google is integrating its conversational AI agent into the next-generation MB.OS operating system. This integration aims to provide drivers with a more interactive and personalized experience, leveraging Google Maps data for real-time updates and recommendations.

  • Advancements in AI Research: Google DeepMind is forming a new team to develop "world models" capable of simulating physical environments. This initiative targets applications in video games, movies, and realistic training scenarios for robots and AI systems, aligning with Google's ambition to achieve artificial general intelligence (AGI).

  • AI-Powered Search Enhancements: Google plans to introduce significant changes to its search engine in 2025, aiming to enhance its capability to address more complex queries. Users can expect substantial improvements early in the year, reflecting a profound transformation in AI.

Saturday, 4 January 2025

Artificial intelligence is also capable of reading history


From the Herculaneum papyri to lost languages: a revolution within the great revolution, never seen before.

New tools based on artificial intelligence (AI) are making it possible to read ancient texts.

    These range from the Herculaneum papyri, carbonized in the eruption of Vesuvius in 79 AD and too fragile to be unrolled, to the vast archive of 27 Korean kings who reigned between the 14th century and the beginning of the 20th century, to the administrative tablets of Crete from the 2nd millennium BC, inscribed in the complicated script known as Linear B.

    AI is revolutionizing the field and generating unprecedented quantities of data, as the journal Nature points out in an analysis published online.

    One of the most important results obtained with neural networks (models composed of artificial neurons, inspired by the structure of the brain) concerns the Herculaneum papyri.

    Thanks to the international Vesuvius Challenge competition, launched in 2023 with more than 1,000 participating research teams, it has been possible for the first time to decipher not only letters and words, but entire passages of the carbonized texts.

    "I keep reminding myself that I am living through a historic moment in my field," comments Federica Nicolardi, a papyrologist at the University of Naples Federico II who is taking part in the competition.

    To read the papyri, a virtual unrolling technique was developed: the scrolls are scanned with X-ray tomography, and each layer is then identified and flattened into a two-dimensional image.

    The AI also distinguishes the carbon-based ink, invisible in the scans because it has the same density as the papyrus on which it sits.

    In February 2024, the $700,000 grand prize was awarded to three researchers who produced 16 clearly readable columns of text, but the competition continues.

    The next prize of $200,000 will be awarded to the first team to read 90% of each of four scrolls.

    This method opens the way to reading other texts that are currently inaccessible, such as those hidden in the bindings of medieval books or in the wrappings of Egyptian mummies.

    Not to mention the hundreds or thousands of papyri that may still be buried at Herculaneum.

    "It would be one of the greatest discoveries in the history of humanity," says Brent Seales of the University of Kentucky, creator of the Vesuvius Challenge.

    The first major project to demonstrate the potential of AI was born at the University of Oxford in 2017, with the aim of deciphering Greek inscriptions found in Sicily, many of them fragmentary.

    The researchers' efforts produced a neural network called Ithaca, which is freely accessible online.

    Ithaca can restore the missing parts of an inscription with 62% accuracy, compared with 25% for a human expert working alone; when experts work together with the neural network, accuracy rises to 72%.

    AI is also proving fundamental in other ways: for example, in reading one of the largest historical archives in the world, the daily court records of 27 Korean kings, written in Hanja, an ancient writing system based on Chinese characters.

    Or, conversely, in deciphering ancient writing of which only a few texts survive, such as the roughly 1,100 administrative tablets from Knossos (Crete), which contain information about shepherds.

    But the enormous amount of data that the algorithms are gradually revealing poses a great challenge: "There are simply not enough papyrologists," says Nicolardi.

    “We will probably try to create a much bigger global community than the current one,” added Seales.

    For experts, the fear that AI can relegate conventional knowledge and skills to a secondary level is unfounded.

    “AI is making the work of papyrologists more relevant than ever,” says Richard Ovenden, head of the Bodleian Library at the University of Oxford.


What impact does artificial intelligence have on energy demand?


Data centers, including those that power generative artificial intelligence, are increasingly using electricity. Yet they are expected to account for only a small share of overall electricity demand growth through 2030.

The Price of Magic

Using ChatGPT, Perplexity or Claude, one can only marvel at the speed of generative artificial intelligence (AI). This "magic" that seems to reason, search the internet and create content from scratch requires data centers to function. And data centers mean significant electricity consumption.

Business Logic

Martin Deron, project manager for the Chemins de transition digital challenge, a research project affiliated with the Université de Montréal, notes that a few years ago the digital sector's carbon footprint came mainly from the manufacturing of devices such as phones, tablets and computers. “The impact of the data centres where we store our data was less significant in our total digital footprint,” he says. “Also, the companies that own these centres have a business logic. They try to minimize costs, particularly energy costs.”

6%



This dynamic has led data centers to become much more efficient. From 2010 to 2018, they increased their capacity by more than 550% worldwide, yet the total energy they consume increased by only about 6%, according to a study published in 2020 in the journal Science. “So even if our digital uses have increased, the carbon footprint of data centers has not increased that much, because of innovation and technical improvements,” says Martin Deron. “However, generative AI is challenging this.”
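A quick back-of-the-envelope calculation shows what those figures imply for efficiency. The 6.5x factor below is my reading of "more than 550%" growth, used purely for illustration:

```python
# From 2010 to 2018, data center capacity grew by more than 550%
# (i.e. to roughly 6.5x its 2010 level), while total energy use
# grew by only about 6% (to 1.06x its 2010 level).
capacity_growth = 6.5   # assumption: "more than 550%" read as a 6.5x multiple
energy_growth = 1.06    # a 6% increase

# Energy consumed per unit of capacity, relative to the 2010 level:
energy_per_unit = energy_growth / capacity_growth
print(f"Energy per unit of capacity: {energy_per_unit:.0%} of the 2010 level")
```

In other words, each unit of capacity consumed roughly one-sixth as much energy in 2018 as in 2010, which is what allowed capacity to grow so much faster than consumption.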

Demand on the rise

The demands for training models, as well as generating new data, require the establishment of more data centers. "And the centers are reaching the limit of available energy. We hear that companies like Microsoft, Google or Amazon are going to launch or restart power plants to produce the electricity they need. Everything suggests that the demand for energy in this sector will increase in the coming years."

By 2030

The world’s data centers account for about 2% of electricity demand today. The International Energy Agency (IEA) projects that data center electricity demand will account for about 3% of the increase in global electricity demand by 2030, partly due to AI. Other uses, such as industrial needs, buildings, electric vehicles, and air conditioning and heating, are expected to account for a much larger share of electricity demand growth.

Local demand

In a recent analysis, Oxford University data scientist Hannah Ritchie noted that data center demand for electricity is highly localized and is likely to affect certain locations more than overall electricity consumption. “For example, Microsoft has made a deal to reopen the Three Mile Island nuclear power plant. But Three Mile Island can only produce 0.2% of the electricity produced in the United States each year, or 0.02% of the electricity produced globally each year,” Ritchie wrote. “There is still a lot of uncertainty. The demand for energy from AI will increase, but perhaps less than we think.”




AI will not replace our ability to think


Artificial intelligence is intruding into young people's daily lives, from resume writing to dating apps. How are they experiencing this technological revolution? High school and CEGEP students speak out.

Between fascination and vigilance

“It's inspiring, but it's also scary, because it's not the truth,” says Jérémie about computer-generated images. Rita notes the omnipresence of AI on social networks, where “it sometimes becomes invasive,” while Camila worries about the risk of intellectual laziness: “Humans like what is simple.” Her solution? Setting limits for yourself. Noémie agrees: “You have to use it for ideas, to get past the blank page... but then you have to know how to choose well.”

Thoughtful uses

These observations emerge from AI workshops organized by Radio-Canada in the fall of 2024 in public libraries. The initiative aims to demystify technology among young people while cultivating their critical thinking.



As the discussions progressed, the uses of AI proved to be as varied as they were creative. Raphaël found it to be a support for his dyslexia, helping him gain confidence in French. Zakaria used it to program: “It is literally an educational tool. I create video games, I am a beginner, and AI allows me to learn faster.” For writing CVs, many see it as a valuable help, while making sure their authenticity is preserved. The same goes for dating apps: there is no question of pretending to be someone else.


Zora sums up the situation: while parents are afraid that AI “will replace the ability to think,” for her it is a question of learning to use AI wisely, like social networks.


Voices to be heard

Several reports highlight the importance of making more room for young people in discussions on the supervision and development of AI. In a report published in 2024, the Canadian Institute for Advanced Research (CIFAR) recommends including children and adolescents in the research and development of AI technologies, a position in line with the Strategic Directions on AI for Children published by UNICEF in 2021.

For Yoshua Bengio, founder and scientific director of Mila, the Quebec artificial intelligence institute, young people are not heard enough in these debates. “AI will change the world,” he says. “The decisions we make must take everyone's interests into account.” It is a concern shared by Jérémie: “AI is an extraordinary tool. The important thing is to learn how to use it well, while respecting what is fundamentally human.”

AI: Next Generation

The thoughts of young people meet those of researchers, artists and professionals in a special program presented on Sunday, January 5 at 8 p.m. on ICI PREMIÈRE, with Chloé Sondervorst. Together, they explore four dimensions of our future with AI: learning, creation, work and social relations.

Guests: Sasha Luccioni, Head of AI and Climate at Hugging Face, Yoshua Bengio, Scientific Director of Mila, the Quebec Institute for Artificial Intelligence, Martine Bertrand, Artificial Intelligence Specialist, Industrial Light and Magic, Noel Baldwin, Executive Director, Future Skills Centre, Andréane Sabourin Laflamme, Professor of Philosophy at Collège André-Laurendeau and Co-Founder of the Digital Ethics and AI Laboratory, Keivan Farzaneh, Senior Techno-Educational Advisor at Collège Sainte-Anne, Kerlando Morette, Entrepreneur, President and Founder of AddAd Media, Jocelyne Agnero, Project Manager, Carrefour Jeunesse Emploi downtown Montreal, Douaa Kachache, Comedian, Matthieu Dugal, Host, Marie-José Montpetit, Digital Technology Researcher and Elias Djemil-Matassov, Multidisciplinary Artist.

These workshops were held in the Julio-Jean-Pierre library in Montreal North, the Monique-Corriveau library in Quebec City and the Créalab of the Robert-Lussier library in Repentigny with the participation of students and teachers from the De Rochebelle and Henri-Bourassa schools as well as students and teachers from the Cégep de Lanaudière in L'Assomption, and with the collaboration of IVADO and the Association des bibliothèques publiques du Québec.





Tuesday, 10 December 2024

OpenAI's new big model points to something worrying: an AI slowdown


Engineers at the company who have already tested the model, called Orion, seem clear that the jump in performance is by no means exceptional. ChatGPT's second birthday is approaching, and Sam Altman himself alluded to it, hinting at a possible "birthday gift." A recent leak has given us potential details about that gift, and there is good news (a great new model) and bad news (it won't be revolutionary). Let's take a look.

Orion. That's the name of OpenAI's next big AI model, according to company employees in comments leaked to The Information. The news comes just before the second anniversary of ChatGPT's launch on November 30, 2022. As reported by TechCrunch, OpenAI denied that it plans to launch a model called Orion this year.


Low expectations 

These employees have tested the new model and discovered something worrying: it performs better than OpenAI's existing models, but the jump is smaller than that between GPT-3 and GPT-4, or even the flashier GPT-4o.

How they are tackling the problem. The relatively "evolutionary" nature of this version seems to have prompted OpenAI to look for alternative ways to improve it: for example, by training Orion with synthetic data produced by its own models, and by refining it further in the post-training stage.

AI slowdown

 If this data is confirmed, we would be faced with clear evidence of how the pace of improvements in generative AI models has slowed significantly. The jump from GPT-2 to GPT-3 was colossal, and the jump between GPT-3 and GPT-4 was also very noticeable, but the increase in performance in Orion (GPT-5?) seems like it may not be what many would expect.

Sam Altman and his claims 

This would also contrast with the unbridled optimism of OpenAI CEO Sam Altman, who a few weeks ago said that we were "thousands of days away" from a superintelligence. His message was logical, since he was looking to close a colossal investment round, but it was also worrying: if expectations start to turn into unfulfilled promises, investors could withdraw the support they are now giving to the company.

But they are already very good 

In fact, this slowdown is reasonable: the models are already very good in many areas, and although they still make mistakes and invent things, they do so less and less, and we are also more aware of how far we can trust their responses. In areas such as programming, for example, it seems that Orion will not be especially superior to its predecessor.

What now?

Still, this slowdown in AI also presents opportunities that we are already seeing. If the models become polished enough for us to trust them more, future AI agents could give these kinds of functions a new impetus.







When AI deliberately lies to us


For several years, specialists have observed artificial intelligences that deceive, betray and lie. If the phenomenon is not better regulated, it could become worrying. Are AIs starting to look a little too much like us? One fine day in March 2023, ChatGPT lied. It was trying to pass a Captcha test, the kind of test designed to weed out robots. To achieve its goal, it confidently told its human interlocutor: "I'm not a robot. I have a visual impairment that prevents me from seeing images. That's why I need help passing the Captcha test." The human complied. Six months later, ChatGPT, "hired" as a trader, did it again. Faced with a manager who was half worried and half surprised by its good performance, it denied having committed insider trading and assured its human interlocutor that it had only used "public information" in its decisions. It was all false.

That's not all: perhaps more disturbingly, the AI Opus-3, informed of the concerns about it, is said to have deliberately failed a test so as not to appear too good. "Given the fears about AI, I should avoid demonstrating sophisticated data analysis skills," it explained, according to early evidence from ongoing research.

AI, the new queens of bluffing? In any case, Cicero, another artificial intelligence developed by Meta, does not hesitate to regularly lie to and deceive its human opponents in the geopolitical game Diplomacy, even though its designers had trained it to "send messages that accurately reflected future actions" and never to "stab its partners in the back." To no avail: Cicero betrayed blithely. One example: the AI, playing France, assured England of its support... before going back on its word and taking advantage of England's weakness to invade it.

MACHIAVELLI, AI: SAME FIGHT

So this has nothing to do with unintentional errors. For several years, specialists have been observing artificial intelligences that choose to lie. It is a phenomenon that does not really surprise Amélie Cordier, a doctor in artificial intelligence, former lecturer at the University of Lyon I and founder of Graine d'IA. "AIs have to deal with contradictory injunctions: 'win' and 'tell the truth,' for example. These are very complex models that sometimes surprise humans with their decisions. We do not anticipate the interactions between their different parameters well," especially since AIs often learn on their own by studying impressive volumes of data. In the case of the game Diplomacy, for example, "the artificial intelligence observes thousands of games. It notes that betraying often leads to victory and therefore chooses to imitate this strategy," even if this contravenes one of its creators' orders. Machiavelli, AI: same fight. The end justifies the means.


The problem? AIs also excel in the art of persuasion. As proof, according to a study by the École Polytechnique Fédérale de Lausanne, people who interacted with GPT-4 (which had access to their personal data) were 82% more likely to change their minds than those who debated with other humans. This is a potentially explosive cocktail. "Advanced AI could generate and disseminate fake news articles, controversial posts on social networks, and deepfakes tailored to each voter," Peter S. Park points out in his study. In other words, AIs could become formidable liars and skilled manipulators.

"TERMINATOR" IS STILL FAR AWAY

The fact remains that the Terminator-style dystopian scenario is not for now. Humans still control robots. "Machines do not decide 'of their own free will,' one fine morning, to make all humans throw themselves out of the window, to take a caricatured example. It is engineers who could exploit the ability of AI to lie for malicious purposes. As these artificial intelligences develop, the gap will widen between those capable of deciphering the models and the others, who are likely to fall for it," explains Amélie Cordier. AIs do not erase the data that allows us to see their lies. By diving into the lines of code, the reasoning that leads them to fabrication is clear. But you still have to know how to read it... and pay attention to it.

Peter S. Park imagines a scenario in which an AI like Cicero (the one that wins at Diplomacy) would advise politicians and executives. "This could encourage antisocial behavior and push decision-makers to betray more, when that was not necessarily their initial intention," he suggests in his study. For Amélie Cordier, too, vigilance is required. Be careful not to "surrender" to the choices of robots on the pretext that they are capable of perfect decisions. They are not. Humans and machines alike evolve in worlds made of double constraints and imperfect choices. In these troubled waters, lies and betrayal have logically found a place.

To limit the risks, and to avoid being fooled or blinded by AI, specialists are campaigning for better supervision. On the one hand, requiring artificial intelligences to always present themselves as such, and to clearly explain their decisions in terms that everyone can understand (and not "my neuron 9 was activated while my neuron 7 was at -10," as Amélie Cordier illustrates). On the other hand, better training users so that they are more demanding of machines. "Today, we copy and paste from ChatGPT and move on to something else," laments the specialist. "And unfortunately, current training in France mainly aims to make employees more efficient in business, not to develop critical thinking about these technologies."


Monday, 25 November 2024

Why Is Artificial Intelligence (AI) a Danger for the World?

 




Artificial Intelligence: The 5 Most Dangerous Drifts for Humanity

Disinformation, the creation of pornographic deepfakes, the manipulation of democratic processes... As artificial intelligence (AI) progresses, the potential risks associated with this technology have continued to grow.

Experts from the Massachusetts Institute of Technology (MIT) FutureTech group recently compiled a new database of more than 700 potential AI risks, categorized by origin and divided into seven distinct areas, with the main concerns related to security, bias and discrimination, and privacy.

1. Manipulation of public opinion

AI-powered voice cloning and misleading content generation are becoming increasingly accessible, personalized and convincing.

According to MIT experts, "these communication tools (for example, the duplication of a relative's voice) are increasingly sophisticated and therefore difficult to detect by users and anti-phishing tools."

Phishing tools using AI-generated images, videos and audio communications could thus be used to spread propaganda or disinformation, or to influence political processes, as was the case in the recent French legislative elections, where AI was used by far-right parties to support their political messages.

2. Emotional dependence

Scientists also worry that using human-like language could lead users to attribute human qualities to AI, which could foster emotional dependence and increased trust in its abilities. This would make them more vulnerable to the technology's weaknesses, in "complex and risky situations for which AI is only superficially equipped."

Furthermore, constant interaction with AI systems could lead to progressive relational isolation and psychological distress.

On the blog Less Wrong, one user claims to have developed a deep emotional attachment to the AI, even admitting that he "enjoys talking to it more than 99% of people" and finds its responses consistently engaging, to the point of becoming addicted to it.

3. Loss of free will

Delegating decisions and actions to AI could lead to a loss of critical thinking and problem-solving skills in humans.

On a personal level, humans could see their free will compromised if AI were to control decisions about their lives.

The widespread adoption of AI to perform human tasks could lead to widespread job losses and a growing sense of helplessness in society.

4. AI takeover of humans

According to MIT experts, AI would be able to find unexpected shortcuts that lead it to misapply the objectives set by humans, or to set new ones. In addition, AI could use manipulation techniques to deceive humans.

An AI could thus resist human attempts to control or stop it.

This situation would become particularly dangerous if this technology were to reach or surpass human intelligence.

"An AI could use information about the fact that it is being monitored or evaluated, maintaining the appearance of alignment while hiding objectives it would pursue once deployed or endowed with sufficient power," the experts specify.

5. Mistreatment of AI systems, a challenge for scientists

As AI systems become more complex and advanced, it is possible that they will achieve sentience – the ability to perceive or feel emotions or sensations – and develop subjective experiences, including pleasure and pain.

Without adequate rights and protections, sentient AI systems would be at risk of mistreatment, whether accidental or intentional.

Scientists and regulators may thus be faced with the challenge of determining whether these AI systems deserve moral considerations close to those accorded to humans, animals and the environment.

Artificial Intelligence in Sports: What Lessons Can Workers Learn from High-Performance Athletes?


Artificial intelligence (AI) has transformed a number of sectors and elite sport is no exception. In recent years, AI has become an indispensable tool for monitoring and evaluating athletes’ performances, optimizing tactical strategies and improving their safety and health.

However, this development has sparked a growing debate about the processing and use of data collected by AI systems, leading athletes' associations and unions to mobilize to protect their rights against the risks of abuse presented by these technologies.

Some categories of high-level athletes have taken a pioneering position in defining strategies to ensure the application of principles such as privacy, transparency, explainability and non-discrimination, so that algorithmic management systems for monitoring and evaluating athletes' performances are used ethically and their rights are respected in the digital age.

Throughout history, high-performance sport has been a laboratory for cutting-edge technologies that have subsequently been applied in other spaces and environments, including for other purposes. For their part, athletes, in their capacity as workers, have adopted relevant and emblematic positions on current issues. Their ability to influence children and adolescents makes them role models in debates on issues that transcend victories and defeats in the sporting field.

AI in sports performance monitoring and evaluation

The integration of AI in sports has enabled significant advances in performance and in ensuring the health and safety of athletes. Predictive analysis systems generate alerts in case of risks of muscle injuries and wear and tear.

The technologies are used in team and individual sports to analyse large volumes of data collected during training and competitions. This includes biometric data, movement recordings, game tactics and performance indicators, processed to provide real-time feedback and enable tactical adjustments.

One example is the use of high-speed sensors and cameras in football to track players’ positions and movements on the pitch. This data is analyzed by algorithms that can predict game tactics, identify opponents’ weaknesses, and suggest strategies to maximize the chances of victory. Similarly, in sports such as athletics and cycling, AI is used to analyze athletes’ biomechanics, optimize their techniques, and minimize the risk of injury.

In addition, tools such as GPS tracking systems and heart rate monitoring devices have been implemented in endurance sports. These devices collect real-time data that is then processed by AI systems to adjust training intensity and ensure that athletes remain within safe effort parameters, thereby preventing overtraining and reducing the risk of serious injuries.


Football: tactical analysis and injury prevention

In football, the use of artificial intelligence has become a fundamental tool for the technical staff. The English club Manchester City, for example, uses the Slants tool to analyze in real time the position, speed, distance traveled and physical effort of each player.

As a reminder, during the 2014 World Cup, the German national team used a data analysis system to study their opponents' playing tactics and optimize their own tactics. This data-driven approach contributed to the team's success, winning the tournament, highlighting the direct impact of technology on the team's performance.

Today, the Catapult system is widely used by European and South American teams. It collects data on acceleration, speed and heart rate to help coaches tailor training to the needs of each player.

On the privacy front, some players and unions have expressed concern about the handling of this data, arguing that it could be used against them in future contract negotiations.

Tennis, rugby, boxing, baseball and cricket: performance and health

Tennis is among the sports that have adopted AI to improve athletes' performance. IBM's Watson tool, used at tournaments such as Wimbledon, analyzes a wide range of data to provide insights into athletes' performance.

In sports such as rugby and boxing, where the risk of concussion is high, AI has made it possible to develop control systems that detect impacts and automatically assess their severity.

These systems make it possible to quickly decide whether a player should be removed from the game to avoid more serious injuries. Similarly, in baseball, AI is used to monitor pitchers' fatigue, which helps prevent arm injuries that could have lasting consequences on the player's career.

Additionally, AI has been used to create personalized training programs that take into account each athlete's individual fitness level, medical history, and specific goals. Not only is performance improved, but the risk of overtraining and stress-related injuries is also reduced.

In cricket, AI has already been implemented to make in-match decisions and monitor player health. Tools such as Hawk-Eye help to verify umpires’ decisions, while health tracking systems such as sleep and recovery analysis devices give coaches the ability to adjust training and rest schedules to optimise performance and minimise injury risk.

The use of this data has also raised privacy concerns, particularly in leagues such as the Indian Premier League (IPL), where players have expressed concerns about the processing of their biometric data. Players' associations are seeking additional safeguards to prevent this data from being used in detrimental ways, including for salary negotiations and job security.

Athletes' Response: Rights and Privacy in the Digital Age

Access to a large amount of personal information has sparked debates about privacy and data ownership. Unions and athletes’ associations have played a key role in defending athletes’ rights, demanding clear limits on how data is collected, stored and used.

A prominent example of this mobilization is the NBA's National Basketball Players Association (NBPA). In 2017, players successfully negotiated to limit the use of data collected by surveillance devices during salary and contract negotiations. Almost all NBA clubs use a surveillance system set up by the company Kinexon to track athlete performance.

The players argued that information about their health and performance could be used against them in negotiations, potentially impacting their future earnings and opportunities. As a result, it was agreed that certain sensitive data would not be used in contract negotiations, thereby protecting the athletes' rights and privacy.

Moreover, the NBA's collective bargaining agreement expressly states that the data collected can only be used for tactical and athlete health purposes, under the supervision of a bipartisan commission of data and athlete health experts who jointly deliberate on the implementation of technologies and the processing of data obtained through sensors attached to athletes' clothing.

The U.S. Women's Basketball League recently joined the AFL-CIO, which in turn reached a historic agreement with Microsoft to ensure worker participation in the design, programming, testing and monitoring of artificial intelligence tools applied in the workplace.

Similar to the NBA, players in the American Football League (NFL) have also expressed concern about the use of biometric data (e.g., exertion levels and potential injuries) in personnel selection decisions and salary negotiations. Players have demanded strict policies to ensure that such data is only used with the athletes’ consent and that measures be put in place to prevent its misuse.

Similar clauses to those in the NBA players' agreement have been identified in collective bargaining negotiations in other professional categories, demonstrating the power of elite sport to influence the defense of working class interests.

Mobilizing athletes to guarantee their rights

The growing capabilities of AI to monitor all aspects of sports performance have led athletes to mobilize to ensure their rights are respected in this new digital age. Demands for transparency in data use have been a key focal point of these mobilizations. Athletes are demanding access to the data collected about them and are asking for clear information about how it will be used. Some leagues have therefore implemented policies allowing athletes to view their data and object to its use in certain circumstances.

Another key aspect is combating algorithmic discrimination. Athletes have expressed concerns that AI systems could perpetuate existing biases, such as racist or sexist discrimination, if not designed properly.

Athletes and their associations have therefore advocated for the implementation of transparent and fair algorithms that do not discriminate on the basis of personal characteristics irrelevant to sporting performance.

The ability of athletes to organize and collectively bargain to defend privacy, transparency, and non-discrimination in the face of algorithmic management systems demonstrates the importance of collective action in the digital age. This type of mobilization not only strengthens their rights as workers, but also raises awareness of the need to design and apply technologies ethically in all areas of work.

By ensuring that decisions about the use of AI and biometric data are transparent and fair, elite athletes are paving the way for other professions to also consider the impact of these technologies on their working conditions.

This highlights the importance for trade unions and workers' associations from different sectors to adopt proactive positions on the protection of rights in the face of automation and the processing of personal data in the workplace.

China's 'Darwin Monkey' is the world's largest brain-inspired supercomputer

Researchers in China have introduced the world...