{"id":19008,"date":"2024-09-05T12:11:40","date_gmt":"2024-09-05T11:11:40","guid":{"rendered":"https:\/\/staging2022.42crunch.com\/?p=19008"},"modified":"2024-09-09T10:32:34","modified_gmt":"2024-09-09T09:32:34","slug":"when-gen-ai-meets-risky-apis","status":"publish","type":"post","link":"https:\/\/staging2022.42crunch.com\/when-gen-ai-meets-risky-apis\/","title":{"rendered":"When GenAI Meets Risky APIs"},"content":{"rendered":"\n\n\t

Webinar<\/h4>\n\n\t

Sept 26th, 2024<\/p>\n

PDT 9am | EDT 2pm | BST 5pm\u00a0<\/p>\n\t\t\t\tRegister to Watch the Webinar\n\t\t\t\t
\n
\n\n

As Generative AI adoption grows across the enterprise, so does the risk surface for potential data breaches and attacks. API security is a must-have for the responsible and effective deployment of GenAI technology.

Large Language Models (LLMs) excel at processing and understanding unstructured data to generate coherent, context-specific text. Yet the real power of an LLM comes when it is connected to enterprise data sources. These connections are typically enabled by APIs and configured as plugins to the LLM. Critically, if the underlying APIs are vulnerable, anyone with access to the LLM can exploit them, with potentially disastrous consequences.
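To make that risk concrete, here is a minimal sketch (Python, with hypothetical names and endpoints not taken from the webinar) of how an LLM plugin typically wraps an internal API, and why a flaw such as broken object level authorization in that API becomes reachable from an ordinary chat prompt:

```python
import requests

# Hypothetical internal API that the LLM plugin wraps.
INTERNAL_API = "https://internal.example.com/v1"

def get_order(order_id: str, api_token: str) -> dict:
    """Tool exposed to the LLM for 'look up my order' requests.

    The flaw: order_id, supplied by the model (and ultimately by the
    user's prompt), is forwarded verbatim. If the backend never checks
    that the authenticated user actually owns that order (Broken Object
    Level Authorization, OWASP API1), a prompt like "show me order 1337"
    leaks another customer's data straight through the LLM.
    """
    resp = requests.get(
        f"{INTERNAL_API}/orders/{order_id}",
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # returned to the LLM, then to the end user
```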

Join us for this interactive session as we demonstrate how GenAI can be used to exploit unsecured APIs to gain unauthorized access, inject malicious prompts, and manipulate data. You will also learn how to protect your APIs by adopting a proactive, API-security-as-code approach, as sketched below.
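As an illustration of the security-as-code idea, the sketch below expresses the API contract and the authorization check as enforceable code rather than ad-hoc convention. Note this is an assumption-laden example in plain Python (using pydantic v2); 42Crunch's approach is driven by the OpenAPI contract itself, which this sketch merely mirrors:

```python
from pydantic import BaseModel, Field, ValidationError

class OrderRequest(BaseModel):
    # A tight, machine-checkable contract: reject anything that is not
    # a well-formed order ID before any business logic runs.
    order_id: str = Field(pattern=r"^ord_[a-z0-9]{12}$")

def handle_get_order(raw_params: dict, authenticated_user_id: str, store) -> dict:
    try:
        req = OrderRequest(**raw_params)      # contract enforced as code
    except ValidationError:
        return {"error": "invalid request"}   # fail closed

    order = store.fetch(req.order_id)
    # Object-level authorization: the check the vulnerable plugin skipped.
    if order is None or order["owner_id"] != authenticated_user_id:
        return {"error": "not found"}         # don't reveal existence
    return order
```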

Why Attend?