“Freedom, rights and security: we need a global agreement on technology”

Cyber 4.0, April 17, 2024, at the Cyber Crime Conference

Press Release

  • “The EU risks being isolated, while in the rest of the world disparate regulations can put political life and fundamental rights at risk,” says Matteo Lucchetti, director of Cyber 4.0 at the Cyber Crime Conference in Rome
  • Deepfakes, identity theft, algorithms to create malware and launch 0-day attacks: artificial intelligence gives cyber criminals increasingly effective tools
  • Serious cybersecurity incidents worldwide increased by 12 percent last year
  • The generative Artificial Intelligence market is expected to grow almost tenfold by 2030

Rome, April 17, 2024 – “The world is increasingly divided into opposing blocs, including on technology, between those who seek to promote its development firmly anchored in respect for human rights and those who push the accelerator on new technological paradigms without assessing their ethical and regulatory impacts. In a year when 40 percent of the world’s population will go to the polls, without effective international regulation and with the relentless pace of innovation, the malicious use of AI for new forms of cybercrime risks becoming a threat to political and democratic life, capable of influencing elections and decisions in individual countries. Fundamental rights are at risk everywhere, and we must work to ensure that the EU does not remain isolated in defending them.”

This is the warning that Matteo Lucchetti, director of Cyber 4.0, the National Highly Specialized Competence Center for Cybersecurity, issued at the opening of the Cyber Crime Conference, being held today and tomorrow at the Auditorium della Tecnica in Rome. The event, organized by ICT Security Magazine and now in its 12th edition, offers updates on the cyber risk landscape and a venue for discussion, bringing together conference speakers, an exhibition area and a qualified audience of ICT companies, legal and cyber diplomacy professionals, law enforcement, judicial authorities, universities and research centers.

The year 2023 closed with alarming cybersecurity data, and all the major authorities in the field confirm that cyber attacks continue to grow. According to the latest Clusit report, 2,779 serious incidents were analyzed globally, a 12% increase over 2022. And Italy appears to be increasingly in the crosshairs: last year 11% of the serious attacks mapped worldwide targeted Italy (up from 7.6% in 2022), a figure 65% higher than in 2022. Of all the attacks surveyed in Italy since 2019, 47% occurred in 2023.

In this scenario, Artificial Intelligence adds variety and complexity to the attack techniques that can be expected from the “imagination” of hackers and other “black hats,” perhaps in the service of criminal organizations. “AI enables new forms of cybercrime that are increasingly personalized and effective: from sophisticated algorithms that write code and spread ever more adaptive malware, shortening the time needed to launch ‘0-day’ attacks, to multimedia deepfakes, identity theft-based fraud against individuals and companies, and disinformation campaigns during national elections, of which we have already seen glaring examples.”

The international debate on the subject is increasingly heated, as evidenced by new regulations aimed at protecting personal data and strategic digital infrastructure from new threats, and at managing the central role that Artificial Intelligence now plays both in attacks and in defensive tools. The European Parliament, for example, passed the AI Act last March; it will come fully into force in two years, introducing requirements on risk analysis and on which AI technologies are allowed or prohibited.

“However, the EU risks remaining a minor and isolated player among state actors with unscrupulous ambitions in the technology competition, actors capable of obstructing the broad, effective and binding adoption of rules agreed in long and complex international negotiations,” Lucchetti explains. “What is happening at the Council of Europe is emblematic: on the one hand, many countries have agreed on an international Framework Convention on Artificial Intelligence, Human Rights and the Rule of Law; on the other, the agreement leaves individual countries wide latitude to decide how and to what extent to impose obligations on the private sector that develops the algorithms and their applications. The text, conceived to be legally binding on human rights without harming innovation, therefore risks being underutilized even in the contexts that decide to adopt it, beyond its important outreach on ethical issues.”

Artificial Intelligence now appears to be a guest that is here to stay in everyone’s lives, jobs and homes, thanks to a digital infrastructure that is now mature in terms of connectivity, computational capacity and data, with all the opportunities and risks this entails. Technological and commercial progress promises applications across all economic and social domains.

The market is estimated at $184 billion in 2024 and, with a compound annual growth rate of 28.26 percent, is projected to reach $826.7 billion by 2030, considering all segments of the industry: robotics, computer vision, machine learning, sensing and automation, and natural language processing. The generative AI segment, in particular, is expected to grow almost tenfold, from the current $36 billion to $356.1 billion by the end of the decade (Statista data).
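As a back-of-the-envelope illustration (not part of the Statista figures themselves), the quoted values can be cross-checked with a few lines of Python; the helper `implied_cagr` below is our own sketch and simply applies the standard compound-growth formula to the dollar amounts cited above:

```python
# Illustrative check of the growth figures quoted from Statista in this release.
# The dollar amounts come from the text above; the helper is our own sketch,
# not part of any Statista publication.

def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start value, an end value and a horizon."""
    return (end / start) ** (1 / years) - 1

# Overall AI market: $184 billion (2024) -> $826.7 billion (2030)
overall = implied_cagr(184.0, 826.7, 6)

# Generative AI segment: $36 billion (2024) -> $356.1 billion (2030)
genai_factor = 356.1 / 36.0

print(f"Implied overall CAGR 2024-2030: {overall:.1%}")       # roughly 28-29%, in line with the rate cited
print(f"Generative AI growth factor:    {genai_factor:.1f}x")  # about 9.9x, i.e. 'almost tenfold'
```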

“And the backlash over fundamental rights also underlies the unfinished negotiations at the United Nations on the international cybercrime treaty,” Lucchetti notes. “A point of agreement has been sought for years to overcome the criticisms that part of the world levels at the Budapest Convention, but the feeling is that common positions leading to truly effective global regulation cannot be found. Even if, as expected, a final treaty text is finalized in July of this year, the risk of poor adoption is high, and we may still face for years a division between countries, in which some nations continue to offer ‘safe havens’ to cyber criminals and in which the space for privacy rights and freedom of expression is inevitably compressed further, in favor, though not only, of a supposed advantage in technological evolution.”