When GenAI became part of everyday working life in 2023, it arrived without much of a build-up. One minute it was something you heard about in passing, the next it was suddenly everywhere. The speed, the creativity and the sense that ideas flowed more easily were all intoxicating. What took longer to register was the scale of the risks sitting just beneath all that momentum. These are not speculative fears about AGI. They are immediate, practical risks already shaping the way professionals work.
The most obvious is hallucination. GenAI can be confident, fluent and entirely wrong. It can invent sources, dates, facts and quotations with the same easy assurance it uses to summarise something perfectly accurate. Without careful checking, that fluency becomes a trap. In the visual world, you see it in hands with six fingers, shadows that fall in the wrong direction or figures that appear to hover a few inches above the pavement.
This is not theoretical. In Australia, the fee for a government-commissioned report was partly refunded after auditors found AI-generated errors, including fabricated references in the footnotes. In another case, a parliamentary submission included entirely invented case studies generated by a public AI tool, which led to reputational questions for the organisation named in the document.
Copyright is another real and present concern. Models learn by absorbing and remixing vast amounts of material, much of which sits in legally ambiguous territory. Outputs can look original while echoing existing works more closely than intended.
Then there is data exposure. Otherwise careful professionals often paste sensitive information into prompts without stopping to consider where that data may travel. One rushed moment can create a problem that is slow and painful to unwind.
Perhaps the most significant issue, however, is behavioural. GenAI’s polish creates a sense of completion before any meaningful scrutiny has taken place. It encourages shortcuts. It alters expectations around speed. It tempts people to skip the steps that normally keep work accurate, compliant and ethically sound. The technology itself is not dangerous, but our instinct to trust its shine can be.
Why this needs to be taught first
This is why responsible GenAI use should be taught at the very beginning, not bolted on later as a compliance reminder. The first lesson people learn tends to become the default habit. If that first experience is playful but unstructured, the habits that follow are often casual and sloppy. If the first experience combines experimentation with a clear sense of boundaries, people are far more likely to keep judgement switched on.
In the same way we once taught basic digital hygiene, search skills and email etiquette, we now need a shared baseline for GenAI. Knowing how to write a clever prompt is useful, but knowing how to question the answer is essential. That is the difference between a party trick and a professional skill.
What good practice looks like
My time working as a business partner to the Ethics, Risk and Compliance function at Novartis sharpened this understanding. I was surrounded by people who approached these emerging risks with clarity, seriousness and an impressive level of professional discipline. They were rigorous without being alarmist and curious without being careless. That mindset influenced me more than any piece of technology ever could.
The experience reinforced a simple conviction. GenAI can do remarkable things, but it is not magic and it is not neutral. Anyone using it needs to understand where it excels, where it collapses and how quickly it can lead you astray. Responsible GenAI use is not about slowing people down for the sake of it. It is about keeping judgement switched on while everything else accelerates.
Slow down, double-check and question the source. Think carefully about what you feed into it. Treat the tool as a partner, not a shortcut. Good judgement is the safeguard that keeps everything standing.
The role of organisations: create sandboxes, not shortcuts
For organisations there is another responsibility. If people are pushed towards public tools because there is no secure alternative, risky behaviour is almost guaranteed. If they are given a properly governed, sandboxed environment with clear guardrails, good practice becomes much easier.
That means investing in enterprise-grade tools, setting sensible policies, and making it simple to do the right thing. It also means building GenAI into onboarding and training, so that responsible use is framed as part of being a modern professional, not a specialist concern for a few enthusiasts.
GenAI can strengthen creativity, but only if responsibility grows with it. That is the skill modern professionals need most, and the lesson from this moment that is worth keeping close.