Generative AI and information security require management knowledge

New technology enables benefits that we could not previously imagine. In this environment, it is up to us as users to understand the tech companies' underlying business models and to apply the technology responsibly. The discussion about different forms of AI and information security is already heated, but still in its infancy; Jen Easterly and Pontus Wärnestål, among many others, return to it regularly. AI is certainly not new, but only now is there enough computing power at a reasonable price for more people to train models on huge amounts of data.

At the same time, there is a lot to think about. Generative AI is a type of artificial intelligence that creates (generates) new content, such as text, images, music or code. It is trained on large amounts of data and learns to generate similar content from the patterns it identifies.

The more specific the data the model is trained on, the more specific answers it can generate. This means that we have a potential security problem with both the data we train the model on and the information we enter in the prompt. It is a decisive advantage to think "security by design" and to build on already established frameworks for information security, such as the ISO 27000 series.
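One way to act on "security by design" at the prompt level is to redact sensitive information before it leaves the organization. The sketch below is purely illustrative: the patterns, placeholder tags and `redact_prompt` function are our own assumptions, and a real deployment would need far more robust detection than two regular expressions.

```python
import re

# Illustrative patterns only; real sensitive data (names, addresses,
# customer identifiers, ...) needs much more thorough detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_prompt(text: str) -> str:
    """Replace each match with a placeholder tag before the prompt
    is sent to an external generative-AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Anna at anna@example.com or +46 70 123 45 67 about the tender."
print(redact_prompt(prompt))
# → Contact Anna at [EMAIL] or [PHONE] about the tender.
```

The point is not the specific patterns but the placement of the control: the redaction step sits between the user and the model, so the policy can be audited and tightened centrally.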

When we now develop AI on a broad front and use generative tools in our operations, it is important not to be swept along solely by the excitement and potential of the new technology:

  • Content security: AI should not generate content that is harmful to anyone physically, emotionally or financially. This includes avoiding content that may be offensive, insulting or inappropriate. 
  • Data privacy: AI should respect user data privacy and not store or share personal information it may come across during conversations. 
  • Fairness and impartiality: AI should strive to be fair and impartial in its responses, and avoid showing prejudice against certain groups or individuals. 
  • Transparency and accountability: Users should be able to understand how and why the AI generates certain responses. In addition, there should be mechanisms to hold AI accountable for the answers it provides. 
  • Respect for copyright: AI should respect copyright and not generate content that violates these laws.

We can take an example from our own business: At Ekan Management, we are currently developing support for tender management and staffing of assignments based on generative AI. In the work of creating and training a specialized model, we have also had to set a standard for the information that we supply and train the model with. We cannot manage a model for generative AI as if it were a piece of software or a data warehouse. The black-box aspect, the deliberately fuzzy handling of data in transit and in storage, and the fact that a generative AI cannot forget are some significant reasons why handling needs to be stricter than we are used to. 

Of course, there is always a risk that new technology and its possibilities are misused, and unfortunately this has already happened. At the same time as we learn to use generative AI to develop our work and value creation, we need to understand and consider, for example, deep fakes, false identities, privacy violations and copyright infringement. These are issues that cannot be delegated to "IT" or to other parts of the business - it is a management issue, as it concerns fundamental strategic prerequisites for the business.

Feel free to contact us at Ekan and we will help you proactively manage this in your business. Tomorrow's reality is, to some extent, something you shape yourself.