Artificial intelligence (AI) can already handle some tasks much more efficiently than humans. And we're only just getting started. AI integration opens up unprecedented opportunities for business and entrepreneurship. But as with any new technological advancement, there's always a BUT. In the case of AI, there are a number of legal considerations you should address before implementing AI in your business.

Pressure to use, or even fully implement, AI in companies is growing, not only from strategic advisers but also from virtually all suppliers of technology solutions and office apps, from educational agencies, and from employees themselves, many of whom are enthusiasts of digitalisation and new technologies.

The legal implications of the use of AI tools are crucial in terms of intellectual property rights, personal data protection, confidential information, and other areas.

However, Czech companies are still getting used to artificial intelligence and are testing how it can benefit them. A May 2023 survey by Blindspot Solutions, which queried top managers of large Czech companies about AI, revealed that about half of large Czech companies have not yet addressed generative AI in a comprehensive way. A total of 21% of companies use ChatGPT without limitations, a fifth regulate its use through internal guidelines, and 6% regulate it strictly. Their primary concerns are the inaccuracy of the information generated and the loss or misuse of corporate data. As lawyers, we advise you to use the possibilities of these rapidly developing technologies smartly, but also to keep in mind the associated legal risks.

Who is the author?

In the context of AI, we most often address the question of authorship of the outputs that AI generates, and consequently who holds exclusive rights to them. Whether it's lyrics "written" by ChatGPT, pictures "drawn" by Midjourney, videos "edited" by Pictory or music tracks by Amper Music. There is no comprehensive regulatory framework for AI yet (in the Czech Republic or the EU). Nonetheless, AI can be generally perceived as a set of algorithms, databases and other protected objects that can be legally regulated.

The use of AI-generated content is primarily subject to the terms and conditions of the tool providers. They may significantly restrict usage rights, or in extreme cases, prohibit you from using the outputs for commercial purposes. But if they don't do so, and you're using AI to create content based on mere "prompts" (text inputs from the user), the answer to the question of authorship and protection of the content created is rather ambiguous.

A clue in this context may be the recent guidance of the U.S. Copyright Office, which takes the view that humans do not exercise ultimate creative control over AI tools: prompts essentially act as "instructions" to the artist, and it is ultimately up to the AI how it chooses to implement them. On this approach, the output generated in this way is not the result of human creative activity, cannot be protected by copyright, and can in practice be used by virtually anyone. How Czech practice will approach this issue is still uncertain. Some experts have argued that if the content generated by an AI tool is created by a technology that merely complements an individual's creative process without overshadowing their personal contribution, it can be attributed to a human creator. A clearer position is likely to emerge with the first court cases on the subject, which may come relatively soon given the general popularity of AI tools.

AI and Ed Sheeran or J. K. Rowling

It’s essential to consider whether you're “feeding” any protected content into AI tools when you're generating outputs – for example, you might want to generate a melody based on Ed Sheeran's latest song for a corporate video, or write a new script based on the text of J.K. Rowling's book. Unless you are utilising such results for personal use only, the use of works protected by exclusive rights, such as images, melodies or songs, etc., generally requires the consent of the respective rightsholders.

Therefore, for the purposes of commercial use of such content, you must procure adequate licences, both for the original content and for any derivatives created by AI tools. This combination of licences is not always readily available. So don't be misled by AI tool providers who claim that "all inputs and outputs belong to the user". Instead, always check that the rights to the inputs entered into AI and the generated outputs have been properly cleared. Ideally, acquire the source content from official photo banks, video banks and other verified sources, which always define their licensing terms and usage conditions. For AI tool providers, always carefully read the rules for the use of the source material and generated content in the respective terms of use.

Attention, sensitive!

The advent of AI completely changes the established system of personal data processing. While your company may be adept at data handling, the nuances of how AI processes this data remain somewhat enigmatic. With AI’s rise, Silicon Valley is increasingly being dubbed “Cerebral Valley”. While it is too early to call it a conscious intelligence, it is still important to remember that millions of people around the globe are entrusting their private data to a technology that lacks complex regulation. Consequently, the exact extent of AI's data processing remains undefined, leaving the possibility for data to take on a “life of its own”.

At the same time, generative AI tools can easily compromise the confidentiality of documents, trade secrets, or other protected information. When your employee copies sensitive contract data into ChatGPT or uploads a confidential document for translation into the free version of DeepL, commercially sensitive information or documents can easily be exposed. In such cases, your company would be accountable for the breach of confidentiality. It’s thus imperative to instruct employees against inputting proprietary company details, or any information related to clients or fellow employees, into AI tools.

Even Homer nods

Also, remember that AI trains its abilities on the basis of sources, known as datasets, which may contain data whose processing requires third-party consent. Moreover, you have no guarantee that the sources do not contain fictitious or outdated information or even incorrect or false information. Take, for instance, the pair of American lawyers who used AI-generated content in a court submission without any prior fact check. It later emerged that the court judgements referenced by the AI did not exist at all.

Therefore, remind your employees that if they wish to use AI outputs, they are always responsible for ensuring that such content is factually correct and does not violate third-party personality rights, engage in unfair competition or infringe upon intellectual property rights. Failure to do so may of course have significant legal implications not only for the providers but also for the users of these AI tools, i.e., you and your employees. As such, we strongly recommend thoroughly checking the relevant inputs and outputs.

Robot's ethics

Don’t overlook ethical issues either – an AI system is only as impartial as the “impartiality” of the data it’s trained on. Again, it is always necessary to check and verify that AI outputs avoid discrimination and unequal treatment, for example in employee recruitment or in assessing the creditworthiness of a loan applicant. Overlooking this may raise further legal complications for users of AI tools.


What’s the way to go?

In practice, we currently see two approaches to AI tools – enthusiastic optimism, typical for start-ups or technological innovators, and the more reserved approach of larger companies, which are well aware of the pitfalls associated with introducing these tools into day-to-day business, whether they're technical, procedural, or the legal issues highlighted above.

Additionally, some companies have prohibited the use of online AI tools for both professional and personal use completely, mainly due to the risk of potential information leaks or claims for intellectual property rights infringement.

Banks are typically the most sceptical about AI tools (e.g., Deutsche Bank, Goldman Sachs, JPMorgan Chase & Co.). Given the strict public regulation they face (especially banking secrecy), their caution is justified. Some tech giants, such as Samsung and Apple, are also taking a cautious approach, with Apple even issuing an internal ban on the use of online AI tools on the same day the ChatGPT app was launched for its iOS operating system. Selected AI tools have also been banned by some countries: not only by non-democratic regimes such as China and Russia, but also by Italy, which temporarily banned ChatGPT, citing concerns over possible misuse of personal data.

While AI tools can streamline a number of internal and external processes, taking business activities to a whole new level, we recommend approaching the implementation of these tools with caution. Ideally, ensure that both the implementation and use of AI tools are carried out in a way that complies with your company's relevant contractual, legal and other obligations.

Adopt technical measures and establish internal guidelines within your company. Train your employees properly. Involve experts in the development of your corporate AI strategy and collaborate closely with team members on its rollout. The fact that this is not a "Mission Impossible" can be seen from the example of Česká spořitelna, which has introduced an internal ChatGPT model for its employees and has also adopted the necessary safeguards to reduce the risk of leaking sensitive data.

Here it comes!

While legislation typically lags behind technological progress, the most relevant upcoming legislation regulating AI at the EU level is the forthcoming AI Act, which proposes a number of obligations for AI providers and users. Following initial approval by the European Parliament, this draft is currently under deliberation by EU institutions and it aspires to become the world’s first legislation to comprehensively regulate AI by late 2023.

The AI Act aims to classify AI systems based on their level of risk, from the lowest, such as video games, which will be essentially unregulated, to those with "unacceptable risk" that will face outright bans (e.g., social credit allocation systems or some biometric identification systems).

High-risk AI systems, e.g., AI systems for the management and operation of critical infrastructure (such as road transport or energy supply), will be subject to strict controls under the forthcoming legislation. The aim is to ensure that these systems are not only risk-compliant but also adhere to relevant technical standards. In addition, generative AI tool providers should be obliged to label AI-generated content and publish training data that fall under copyright protection.

If you are not developing AI, the AI Act will have a rather marginal impact on you. On the other hand, developers of riskier AI systems are likely to have to exercise greater caution before launching their solutions on the EU market from 2026 onwards, ensuring that their technologies undergo comprehensive clearance and meet the relevant legal standards.

This could lead to a significant change in the dynamics of the technology market, where major market players could decide to "bypass" the EU and move their technology hubs to more legally flexible jurisdictions. We therefore hope that the final text of the AI Act will represent a balanced regulatory approach that considers the need for technological progress as well as prudence and the necessary level of security.

Carefully and thoughtfully

When using AI tools for business purposes, we always recommend involving competent experts and, if possible, choosing AI tools designed for business use, as the functionality of free versions can be restrictive for commercial purposes.

AI certainly brings huge potential for optimising and streamlining corporate processes. However, to make the most of these advances while safeguarding yourself from potential legal challenges, you should proceed in an informed, proactive way and with due care. The use of AI tools for business purposes should definitely not be taboo; after all, we ourselves often use them. Ideally, though, always do so with the "blessing" of a lawyer.

Ten pieces of legal advice

1. Check the relevant terms of use of AI tool providers to understand the rules governing commercial use of outputs.
2. Ensure you're either using original content or, if sourcing from repositories like photobanks, verify the licensing terms. Confirm that your AI inputs are properly cleared (i.e., you have the necessary permissions) and that the generated content doesn't violate any exclusive rights.
3. Do not enter any sensitive information or data into AI tools. Instruct all employees on this as well.
4. Consistently check that AI outputs are factually true. Verify all information against another trusted source.
5. Verify that AI-generated outputs do not contain elements of discrimination.
6. Ensure the adoption and utilisation of AI in your business aligns with all the company's contractual, legal, and other obligations.
7. Introduce technical measures and guidelines concerning AI usage. Ensure staff training is thorough and up to date.
8. Involve experts in the development of your corporate AI strategy and engage in discussions with your team members.
9. Opt for AI solutions tailored for business applications over generic, limited free versions.
10. Stay updated on AI's evolving legal landscape, such as through resources like the HAVEL & PARTNERS blog, for the latest in legislation, court rulings, and sector developments.