In 2025, the EU adopted new rules on driving licences with the aim of reducing the number of accidents on EU roads. The rules introduce:
In addition, the rules harmonise a number of aspects:
a. Validity
Driving licences will be valid for 15 years for motorcycles and cars. EU countries can reduce this period to 10 years if the licence can be used as a national ID. Truck and bus licences need to be renewed every five years. EU countries can shorten the validity of driving licences of drivers who are 65 years or older.
b. Physical and mental fitness to drive
Before their first licence, a driver has to pass a medical and eyesight check. For car and motorcycle licences, EU countries can decide to replace the medical check with a self-assessment.
In terms of driving under the influence of alcohol, EU countries must have stricter rules or sanctions for novice drivers than for experienced drivers. They can also decide to have a zero-tolerance policy on alcohol and drugs (i.e. banning consumption for all drivers).
c. Licences from non-EU countries
Licences from non-EU countries with road safety standards similar to the EU’s can be exchanged for a licence that is valid throughout the EU. Together with EU countries, the Commission will decide to which non-EU countries this applies.
d. Licences in EU countries of citizenship
The new rules also allow citizens living abroad to get their first category B (passenger car) licence in their EU country of citizenship. This applies if the EU country they live in does not provide interpretation or translation in the citizen’s EU language for the practical or theoretical tests.
e. New driving test requirements
Theory and practical tests must place more emphasis on the safety of vulnerable road users, such as children, pedestrians, cyclists and users of e-scooters. Drivers will also have to learn about:
f. Driving alternative fuel vehicles
The new rules allow a person with a category B licence to drive vehicles powered by alternative energy sources, such as electricity, hydrogen or biofuels (including emergency vehicles) up to a weight of 4.25 tonnes (instead of 3.5 tonnes). This is because those vehicles are often heavier, for example because of the weight of batteries.
g. Minimum age
EU countries can lower the minimum age for getting a licence to 15 years (for heavy quadricycles or vehicles under 2.5 tonnes with a maximum speed of 45 km/h), but only within their territory and after securing the agreement of the Commission.
The minimum age to get a truck licence has been lowered from 21 to 18 years, and for a bus driving licence from 24 to 21 years, if the applicant has a certificate of professional competence. EU countries can allow 17-year-olds to drive a truck or van on their territory (only if accompanied by an experienced driver).
EU countries have until November 2028 to incorporate these rules into their national laws. Rules will start applying from November 2029 at the latest.
Cross-border driving bans
The EU has also adopted new rules to ensure that serious road traffic offenders are held responsible throughout the EU. EU countries will have to inform each other of driving offences and recognise driving bans in specific circumstances. The rules apply to driving bans imposed because of:
When an EU country imposes a ban of at least three months, and the driver has exhausted all courses of action against that decision, the EU country where the driver’s licence was issued will be notified.
The EU country that issued the licence will then notify the driver – when possible – within 20 working days and will decide whether to impose a driving ban that applies across the EU.
In certain cases, for example if the driver’s right to be heard in court was not complied with, or the speed limit was not exceeded by more than 50 km/h, the EU country that issued the licence can decide to exempt the driver and not implement a ban.
Keep sending your questions to the Citizens’ Enquiries Unit (Ask EP)! We reply in the EU language that you use to write to us.
Author: Philip Ryan recently defended his PhD – Bureaucracy Mapping: Inclusive Design for Institutional Navigation – at University College Dublin. His research interests include inclusive design, bureaucracy, sociology, technology, user experience, trust, privacy, and migration.
Regulating technology can be difficult, and the current explosion of generative Artificial Intelligence (AI) has left even the companies producing the technologies struggling to keep up. Healthcare applications powered by general purpose Large Language Models (LLMs) increasingly promise to completely change the provision of care. In medicine, as in law, expecting one tool to be a solution for everything raises similar issues. AI as a panacea can be a poison as much as a cure, especially when it removes protection and agency from users and creates workarounds for necessary regulations. While the lack of service and capacity in healthcare could be addressed by products like “AI agents”, their implementation should not be allowed to escape the regulations already in place.
Should these technologies be subjected to regulation such as the Medical Devices Directives when used for diagnosis? Regulation must engage with technologies that engage with health, no matter how their providers try to categorise themselves. When every major AI provider’s chatbot gives health-related advice, the fig leaf of “not for medical or diagnostic usage” cannot be allowed to frustrate legitimate attempts to regulate healthcare. Technologies powered by AI are part of a long tradition of promising a doctor in your pocket (Lupton and Jutel 2015). In 2025, the EU Artificial Intelligence Act (AI Act) is still taking effect against a world economy that is increasingly reliant on the unrealised promise of these technologies. The uncertainty of the Act’s interaction with healthcare still stands (Gilbert 2024), and the horizontal approach of the Act makes it “necessary to adopt further guidelines to address the unique needs of the healthcare sector” (Van Kolfschooten and Gross 2025).
Health information online is increasingly provided through AI summaries, which obfuscate the origin of the information, invent falsehoods through hallucinations, and push action through confident-sounding statements. In replacing web search results, which had become the go-to information resource over the previous two decades, they create a dangerous new environment in which previously somewhat reliable sources have been replaced by superior-looking interactions (Gross et al. 2025). While attention has been paid to the ability of these LLMs to encourage self-harm (Yousif 2025), it will be harder to find obituaries attributable to AI’s good intentions and hallucinations, such as dangerous dietary advice, as in a recently reported case of a man who poisoned himself after replacing sodium chloride (table salt) with sodium bromide (Burgard 2025).
While precise healthcare regulations may be required, many of the general rules could be used to protect against the excesses of big tech. The Digital Omnibus will consider the General Data Protection Regulation (GDPR), the e-Privacy Directive, the Free Flow of Non-Personal Data Regulation, the NIS 2 Directive, the AI Act, the Data Act, the Data Governance Act (DGA), and the eIDAS Regulation (European Commission 2025); while not primarily concerned with healthcare, they all affect healthcare provision. In some cases, healthcare uses provide exceptions to rules. How can clinicians provide care when they do not understand the ramifications of agreements between users and the companies that own the technologies used for diagnosis, agreements that could follow patients through their entire lives? As new versions of healthcare become more reliant on these services, how can GDPR consent be meaningfully provided if the technology is so encompassing that there is no meaningful option for healthcare outside of the technology itself?
Correcting the Duck
The EUHealthGov panels at the UACES 55th annual conference in Liverpool gave me great insight into the vibrant research into EU health policy. As one panel participant quipped, health policies are often like a thrown duck, taking their own path after the initial toss. Iterative elements of regulation, such as consultation periods and full engagement with the implementation and standardisation phases, are extremely important in controlling the development of healthcare. AI is not sustainable or desirable when it attempts to turn patients into users.
The regression from health policy since the end of the COVID-19 pandemic feels like a missed opportunity, and the realities of an ever-ageing population and more complex healthcare requirements must not be taken over by false promises of innovative technologies. Visibility and public engagement with regulatory processes are difficult but vital. The importance of the patient role must be correctly identified and developed into something beneficial within the evolving spaces changed and developed by technologies. The infrastructure path dependency of AI cannot be allowed to decide what best practice is in healthcare. As these technologies become more all-consuming, healthcare cannot be treated as just a pile of money and data to give AI more power.
The imagined future capacities of generative AI must be questioned, and the related benefits and dangers considered, as these technologies are inserted into more and more vital services. For example, the increasing environmental effect of using resource-intensive AI in healthcare must also come into play. As per the concept of One Health, the health of the planet feeds into the health of its inhabitants. On-the-nose examples are emerging from the noise and air pollution caused by data centre developments (Tao and Gao 2025). Changes like the European Commission’s Digital Omnibus exercise could bring about deregulation and a weakening of the Union’s protections. While inclusivity may seem disconnected from regulatory structures, the complexity of any agreement must be assessed, and comprehension by those affected should be given some level of consideration if regulatory processes are to be imposed. Simplification and coherence should serve the EU’s long-term strategic vision for a competitive, secure, and rights-based digital economy rather than short-sighted deregulatory moves in favour of technologies that may not work.
Innovation and other techno-optimistic concepts are not solely positive. As Cory Doctorow (2025) highlights, the degradation of services is strongly linked to the ability to avoid regulations by shifting activity to an application. It’s legal because they did it “with an app”, especially in the case of Uber (an unregulated taxi company), Airbnb (unregulated holiday accommodation), and anything calling itself fintech (unregulated banking). Unregulated health is, and will be, as lucrative as it is harmful. Will Europe function as a fortress for its citizens, or a vassal to the new extractive practices of global corporations? While protections are designed into projects, deregulation and other forms of degradation could make initiatives like the European Health Data Space Regulation (EHDS) a pre-collection of sensitive information for legitimate or illegitimate actions. Information provided for a service years ago could be used in ways practically unimaginable at the time consent was given to the data being stored, or sold on when a business ends (Church and Smith 2025). In the new realities brought about by these technologies, regulations will have to answer more scenarios and, hopefully, protect ever more marginalised users and citizens.
While it is possible for technologies to simplify healthcare delivery, reducing the power of related laws should be approached with caution. The deregulation agenda of technology companies, which can treat many of the harms caused by their actions as externalities and justifiable costs, should be viewed with suspicion, especially in healthcare. The incursion of companies like the surveillance data broker Palantir offers them unprecedented access to healthcare information (Osborne 2024), highlighting the value of such assets and the protean nature of business interests. The EU’s ability and appetite to create and enforce digital policy and data protection rules is currently singular; how it adapts to more aggressive regulatory regimes, and the related race to the bottom, should be part of the discourse, especially around healthcare.
References
Burgard, B. (2025) ‘ChatGPT Advice Triggers Bromide Poisoning, Psychosis’, Medscape, 10 Jan, available: https://www.medscape.com/viewarticle/chatgpt-salt-advice-triggers-psychosis-bromide-poisoning-60-2025a1000qab [accessed 1 Nov 2025].
Church, S. and Smith, G. (2025) ‘23andMe sells gene-testing business to DNA drug maker Regeneron’, Los Angeles Times, 19 May, available: https://www.latimes.com/business/story/2025-05-19/23andme-sells-gene-testing-business-to-dna-drug-maker-regeneron [accessed 27 Aug 2025].
Doctorow, C. (2025) Enshittification: Why Everything Suddenly Got Worse and What to Do about It, London: Verso.
European Commission (2025) Digital Omnibus Regulation Proposal | Shaping Europe’s Digital Future [online], available: https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal [accessed 12 Nov 2025].
Gilbert, S. (2024) ‘The EU passes the AI Act and its implications for digital medicine are unclear’, npj Digital Medicine, 7(1), 135, available: https://doi.org/10.1038/s41746-024-01116-6.
Gross, N., van Kolfschooten, H. and Beck, A. (2025) ‘Why the EU AI Act falls short on preserving what matters in health’, BMJ, available: https://doi.org/10.1136/bmj.r1332.
Lupton, D. and Jutel, A. (2015) ‘“It’s like having a physician in your pocket!” A critical analysis of self-diagnosis smartphone apps’, Social Science & Medicine (1982), 133, 128–135, available: https://doi.org/10.1016/j.socscimed.2015.04.004.
Osborne, R.M. (2024) ‘NHS England must cancel its contract with Palantir’, BMJ, 386, q1712, available: https://doi.org/10.1136/bmj.q1712.
Tao, Y. and Gao, P. (2025) ‘Global data center expansion and human health: A call for empirical research’, Eco-Environment & Health, 4(3), 100157, available: https://doi.org/10.1016/j.eehl.2025.100157.
Van Kolfschooten, H. and Gross, N. (2025) ‘Invisible prescribers: the risks of Google’s AI summaries’, Journal of Medical Ethics blog, available: https://blogs.bmj.com/medical-ethics/2025/11/12/invisible-prescribers-the-risks-of-googles-ai-summaries/ [accessed 12 Nov 2025].
Yousif, N. (2025) Parents of Teenager Who Took His Own Life Sue OpenAI [online], BBC, available: https://www.bbc.com/news/articles/cgerwp7rdlvo [accessed 1 Nov 2025].
The post Treating it with an App: AI Techno-optimism Against Regulations appeared first on Ideas on Europe.