Anila Bisha, a celebrated actress in Albania, has initiated legal action against the Albanian government following the controversial launch of Diella, touted as the world’s first artificial intelligence minister. Bisha claims the government has appropriated her likeness and voice without her consent, raising significant concerns regarding personal data rights and the ethical implications of using human images in digital politics.
**The Controversial AI Minister**
In a bold move last year, Albanian Prime Minister Edi Rama introduced Diella, an AI minister intended to ensure transparency and integrity in public procurement. Diella, which notably “gave birth” to more than 80 AI offspring to assist members of parliament, has become a focal point in debates over the intersection of technology and governance. However, the revelation that the AI’s visual and vocal elements are derived from Bisha has sparked outrage, with the actress asserting that her image has been misappropriated for political purposes.
Bisha, a prominent figure in the Albanian film and theatre scene, filed her complaint with an administrative court in Tirana, citing a clear violation of her personal rights. “This is the first legal move to prevent the abuse of Anila’s image,” stated her lawyer, Aranit Roshi. The Albanian government has yet to respond to the allegations.
**Consent and Miscommunication**
While Bisha acknowledges that she previously consented to the use of her voice and image for the government’s e-Albania platform, she says she was never told her likeness would be applied to an AI minister. “It was surprising when I heard the prime minister declare it. I asked how this could happen without my knowledge, without anyone asking me if I wanted my image to be used or not,” she said in an interview with the Associated Press.
The case echoes earlier controversies over celebrity voices in AI products. In 2024, OpenAI faced backlash when Scarlett Johansson objected to a ChatGPT voice that bore a striking resemblance to her own; the company withdrew the voice while maintaining it was not based on hers.
**Legal Proceedings and Future Actions**
Frustrated by the lack of communication from the government since Diella’s announcement in September 2025, Bisha is now pursuing a temporary injunction to halt the use of her image immediately. Her legal team plans to expand the case into a formal lawsuit seeking damages for the unauthorized use of her identity.
Bisha maintains that a person’s identity cannot be commodified without consent. “One cannot take away one’s identity and do with it whatever they want,” she asserted, underscoring her determination to protect her personal likeness from what she views as governmental exploitation.
**Why It Matters**
This case highlights the growing tension between the advance of artificial intelligence and the protection of individual rights in the digital age. As governments increasingly fold AI into public services, the ethics of using human likenesses without consent demand scrutiny. Bisha’s legal challenge could set a precedent for how personal data and likeness rights are managed and protected at a moment when technology is rapidly redrawing the boundaries of identity and representation. The outcome may influence not only the future of AI in governance but also the broader conversation about personal rights in an increasingly digital world.