Image representing the ethical dilemmas of Generative Artificial Intelligence (GAI). A dark background filled with digital symbols and lines of code in blue tones reflects the complexity and opacity of GAI models. At the center, an abstract figure symbolizes GAI's potential to influence human decisions and manipulate information, highlighting ethical concerns such as disinformation, bias and privacy. The interplay of light and shadow conveys a tone of uncertainty, illustrating security risks, over-reliance on technology and the consequences of misuse in areas such as academia and employment.

GAI and ethics: is there nothing “sacred”?

The ethical dilemmas of GAI

The ethical concerns surrounding Generative Artificial Intelligence (GAI) are numerous and complex. Like any tool, this technology can be used in accordance with "good practices" and standards of ethical conduct, or not; but GAI in particular has a high potential to profoundly influence how data and information are generated, interpreted and communicated, with significant impacts on the lives of people, organizations and society.

The possible "misuse" of GAI thus raises, among others, the following concerns:

  • Data bias and discriminatory outcomes: GAI models are often trained on large volumes of data that may reflect historical, cultural, or societal biases. This can lead models to reproduce or amplify these biases, resulting in discriminatory responses or outcomes. For example, a model trained on historical texts could replicate gender, race, or religious biases, affecting the recommendations, decisions, or responses it generates.
  • Disinformation and generation of false content: GAI's ability to create text, images, video or audio indistinguishable from authentic material raises concerns about the misuse of this technology to create disinformation or "deepfakes". These applications can harm the public, undermine trust in information sources and damage the credibility of institutions.
  • Privacy and misuse of personal data: Some GAI models may inadvertently reveal sensitive or personal information from the data they were trained on, which is a significant risk in regulated sectors such as healthcare or financial services. In addition, training these models on data containing personal information raises questions about the consent and ethical use of that data, especially when the scope and purpose of the model are not completely clear to the affected individuals.
  • Job displacement and labor inequality: As GAI is used to automate tasks that were previously performed by people, there are concerns that this technology could displace workers, especially in industries where repetitive or pattern-based tasks are common. This could increase labor inequality, leaving some workers with fewer employment opportunities or in need of retraining without a clear alternative.
  • Lack of transparency and opacity of models: Generative models, especially the more complex ones, can act as "black boxes," where it is difficult for users to understand how results are generated. This lack of transparency raises concerns about accountability, as it can be difficult to attribute errors or biases to specific elements of the model, creating barriers to correction and oversight.
  • Impact on autonomy and decision manipulation: GAI applications that generate content tailored to each user have the potential to influence their decisions and behaviors in subtle ways. This raises ethical questions about individual autonomy, as users may be influenced without realizing it in aspects as varied as buying products, consuming news or even making political decisions.
  • Over-reliance on technology: Blind trust in GAI can lead individuals and organizations to make decisions based on automatically generated outputs, without questioning their validity or context. This could result in the delegation of complex decisions to generative models without sufficient human oversight, increasing the risk of errors and unintended consequences.
  • Risks to human creativity and originality: In creative contexts, GAI can replace tasks that previously required human intervention, raising concerns about the role of creativity and originality. The automatic generation of content can lead to a saturation of “derivative material” that impacts the diversity and richness of cultural and creative production.
  • Security and malicious use: Beyond the risk of misinformation, GAI can be employed in illicit activities, such as the creation of malware, scams, or social engineering attacks, given its ability to generate personalized messages and replicate human patterns. This type of malicious use poses a threat to public and corporate security.

To address these ethical concerns, it is necessary to establish guidelines for GAI's use and supervision, focusing on transparency, accountability and fairness. It is also important for organizations to promote a human-centric approach to AI, prioritizing social well-being and sustainability in the development and implementation of generative models. This will minimize the associated risks and maximize the positive impact of GAI on society.

The European Union is leading the way in legislating against these unwanted effects of Artificial Intelligence with its Artificial Intelligence Act of mid-2024, which most countries around the world are analyzing as a model for their own legislation and regulation.

The most advanced consulting and professional services firms also have specific initiatives on ethics in artificial intelligence and, as is the case with Quanam in Uruguay, are incorporating provisions to this effect in the new versions of their Compliance Programs and their Codes of Ethics and Professional Conduct.

But what happened this semester in the Spanish academic world seems to have crossed lines that most citizens regarded as somehow "sacred".

The fraud of a Rector

The University of Salamanca is a Spanish public university based in the city of Salamanca, where most of its centers are located, although it also has centers in the cities of Zamora, Ávila and Béjar and in the town of Villamayor. It is the oldest university still in operation in Spain and the Hispanic world, and the fourth oldest in Europe. Many readers will have heard the saying "lo que natura non da, Salamanca non presta" ("what nature does not give, Salamanca does not lend"), alluding to the fact that not even a university can work miracles; in any case, the more than eight centuries of history of this venerable house of studies command a unanimous feeling of respect for such a long record of service to science and knowledge.

At the end of September 2024, the Spanish newspaper El País published an article entitled "The fraud of a rector", reproduced in October by the WCA (World Compliance Association) in its weekly newsletter of Friday, October 25, 2024.

A report prepared for the Spanish Committee on Research Ethics on the case of the rector of the University of Salamanca, Juan Manuel Corchado, is, so far, the latest episode in a scientific fraud. The investigations into the rector's practices published by El País had set off alarms months earlier about the credibility of the achievements his website listed with telltale triumphalism.

In theory, Corchado was one of the most cited computer scientists in the world for his academic works. The reality is that a large part of these citations stem from a reprehensible practice: the creation of a kind of citation cartel, that is, a concerted scheme in which the colluding professors cite one another to inflate the apparent relevance and impact of their work. It is another variant of a classic practice that Spanish universities would do well to banish: the nefarious "today for you, tomorrow for me".
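By way of illustration, here is a minimal sketch in Python of how such reciprocal citation patterns inflate a naive citation count and how they might be flagged in a citation graph. All author names, counts and the threshold are invented for the example; real detection would have to account for co-authorship, venues, self-citation rates and time windows.

# Illustrative sketch: how "cartel-like" reciprocal citation patterns
# inflate metrics and how they might be flagged in a citation graph.
# All names and counts below are hypothetical, chosen for the example.

# citations[a][b] = number of times author a cites author b (invented data)
citations = {
    "prof_a": {"prof_b": 40, "prof_c": 35, "outsider": 2},
    "prof_b": {"prof_a": 38, "prof_c": 30},
    "prof_c": {"prof_a": 33, "prof_b": 29},
    "outsider": {"prof_a": 1},
}

def mutual_citations(a: str, b: str) -> int:
    """Citations flowing in both directions between two authors."""
    return min(citations.get(a, {}).get(b, 0),
               citations.get(b, {}).get(a, 0))

def total_received(author: str) -> int:
    """Raw citation count: what a naive ranking would reward."""
    return sum(cited_by.get(author, 0) for cited_by in citations.values())

# Flag author pairs whose two-way citation flow exceeds an arbitrary threshold.
THRESHOLD = 20
authors = list(citations)
for i, a in enumerate(authors):
    for b in authors[i + 1:]:
        score = mutual_citations(a, b)
        if score >= THRESHOLD:
            print(f"Suspicious reciprocity: {a} <-> {b} ({score} mutual citations)")

# Raw counts: the colluding profiles dwarf the honest outsider.
for author in authors:
    print(author, total_received(author))

The point is not the code but the asymmetry it exposes: a handful of colluding profiles can dominate any ranking that simply counts citations, which is precisely why quantity-based metrics are so easy to game.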

The report for the Ethics Committee not only categorically concludes that there was "systematic manipulation" of the rector's curriculum, but also discredits the exculpatory report that the University of Salamanca itself commissioned from a fellow professor of the rector, albeit from another university. In its 17 pages, that report omits substantial information and skirts around the fraudulent practices of a professor who used his knowledge of Artificial Intelligence to pad his alleged merits in a way that was, in effect, artificial. The Confederation of Scientific Societies of Spain also issued a statement of condemnation in June, suggesting new elections at USAL to repair the reputational damage, as half of that university's professors have also demanded.

Being an expert in Artificial Intelligence allowed Corchado to accumulate references in scientific repositories, create fake profiles with which to keep citing himself, and arrange with others to cite his work. It is not the first case of a CV inflated by illicit methods, but the notoriety of this one may serve to improve the functioning of citation rankings, which are often a very insufficient basis for deciding where public funding goes.

The obsessive race for citations, almost always accumulated without the evaluated works ever being read, makes this Spanish episode a symptom of a larger problem: the need to guarantee the soundness of the systems for evaluating academic research. Such an obsession can end up discrediting not only a team's leader but also his collaborators. The embarrassment of the Corchado case, which involves the highest authority of a public university, forces a rethink of how the rankings operate, so that the merits examined rest not on the quantity of citations but on the quality of what is cited.

The case of the head of the University of Salamanca should prompt a rethinking of the systems for evaluating academic merit, and it stands as a new example, unthinkable at the start of the Artificial Intelligence boom, of the ethical risks and fraud that can derive from the misuse of such a powerful tool.

José C. Nordmann

SME in Digital Transformation
Member of the AUC (Uruguayan Association of Compliance)
Associate Member of the WCA (World Compliance Association)
Associate Member of CUGO (Uruguayan Circle for the Best Governance of Organizations)
Member of the World Council for a Safer Planet
Member of the ACFE (Association of Certified Fraud Examiners)
Member of the i2 Group Worldwide Advisory Board