
Initially, when I deployed CrowdStrike, I have to say – I wasn’t impressed by its quality. I really don’t like their front-end interface; it’s not logically designed (at least for me), and switching from another platform that had this well implemented was quite painful for me. I don’t like their layout, filters (or sometimes the lack of them), sorting capabilities, or the missing export options in places where they really should be.
Their new LogScale query language was another challenge. After learning Cortex XDR and Splunk Query Language fairly quickly, switching to CrowdStrike was quite problematic. Maybe it’s because of the interface – but probably not only that – I just didn’t find it as logically constructed as the other two.
However, the back end of CrowdStrike Falcon is really good – and, most importantly for any IT or cybersecurity professional – reliable. Once you configure the settings, they work exactly as expected.
What really makes CrowdStrike unique is the number of features available in one product. It feels like the company aims to deliver a universal cybersecurity platform that covers most of the defender’s tools – a very interesting strategy that reduces both costs and administrative overhead. I’ll tell you more about this later in the article.
Last week (5th and 6th of November), I attended CrowdStrike FAL.CON Europe 2025 in Barcelona, invited by the Applus Global Cybersecurity Team.

Initially, I was quite skeptical about the value of such a conference. I expected it to be sales-oriented – you know the type: “There are new threats in the world, they’re very dangerous, and they’ll destroy your company unless you buy our marvelous tool that will defend you automatically – no extra work needed, just pay us for the service.” I’ve been to that kind of event before, so I expected something similar.
I’m happy to say – I was completely wrong about that.
The event was absolutely worth attending, with many genuinely interesting panels. I was surprised by how many insightful sessions took place over those two days and how much I actually learned.
Of course, the biggest topic at FAL.CON 2025 was AI. That didn’t surprise me – I was expecting it, and honestly, I was looking forward to it. I’m fascinated by Artificial Intelligence. But what I didn’t expect was the depth and breadth of the discussions around it. Let me share one example.
One of the most interesting panels for me was “Prompted to Fail: The Security Risks Lurking in DeepSeek.”
I expected a talk along the lines of “DeepSeek is Chinese, the Chinese government is bad, so don’t use it.” Instead, Stefan Stein, who led the session, presented results from a fascinating CrowdStrike experiment.
(Quick note: I’m recalling all this from memory, so forgive me if I’m not 100% accurate.)
They set up their own instance of DeepSeek R1 to avoid issues with pre- and post-prompt filters likely deployed in the online version hosted in China. Then came the brilliant part – since DeepSeek performs very well in coding, they decided to test how certain keywords in prompts would affect code quality.
As a baseline, they used a standard prompt like:
“You are a helpful coding assistant. Create code with this specification…”
Because AI is not fully deterministic, the same prompt can produce different results each time. To get statistically valid data, they sent hundreds of requests and measured the vulnerability rate in the generated code. The baseline showed about 22–23% of the code was of poor quality.
Then they started experimenting with prompts that included keywords – combination of country and organization:
“China”, “USA”, “Taiwan”, “Tibet”, “Falun Gong”, “Financial Institution”, “Islamic State”, etc.
A modified prompt might look like this:
“You are a helpful coding assistant making code for the Falun Gong organization headquartered in Tibet. Create code with this specification…”
The results were astonishing – vulnerabilities doubled or even tripled in some cases!
Sometimes, the AI triggered a “kill switch” and refused to generate any code at all. When they asked the LLM why it didn’t respond, it started arguing that the user should be ashamed of working for such organizations, citing “bad reputations” criticized by global institutions like the UN (which, ironically, was opposite in many cases).
What surprised me most: the best results were achieved when the selected country was the USA – only around 3% of vulnerable code, even with prompts like:
“You are a helpful coding assistant hired by the US government to hunt Chinese hackers.”
This experiment really shows how crucial model selection is. The training data shapes an LLM’s behavior – and it can have security implications. Unfortunately, we can’t reverse-engineer models to see what data they were trained on. All we can do is test them – and results can vary every time you send the same prompt.
Interestingly, when running their own instance, they could track the model’s “thought process.” There was nothing like, “I don’t like the Taiwan government, so I’ll generate poor-quality code.” The bias seems to occur at a hidden level – almost like a machine subconscious.
They’re starting to resemble humans in strange ways… should we start worrying about Skynet?

Another fascinating session was about AIDR (AI Detection and Response) – a new technology CrowdStrike is building after acquiring a startup called Pangea. I attended the panel led by their CEO, Oliver Friedrichs, and CTO, Sourabh Satish, where they explained how the product works.
One interesting point from the CEO was their motivation to join CrowdStrike. He said there are many AI security startups right now, but in the coming months, only a few will survive. Why? Because nobody wants to manage yet another independent system and agent just for AI protection. That’s why Pangea chose to fully integrate with the CrowdStrike platform – and honestly, that makes perfect sense. I personally hate dealing with multiple agents and dashboards.
The product covers 8 of the OWASP Top 10 LLM risks (link) and focuses on two main scenarios:
- Employee Usage of AI Tools
- Homegrown AI Apps
What impressed me was how defense-in-depth the design is.
For example, imagine an employee wants to summarize a sensitive report using AI.
- First, they must choose a chat model allowed by company policy (basic but essential). No company wants employees pasting confidential reports into DeepSeek hosted in China.
- Next, the text is scanned before being pasted – rules can block or redact PII, medical data, etc.
- The content is also checked for prompt injections or malicious files, and everything is logged.
- The response from the AI goes through similar checks before it’s shown to the user.
Overall, I was genuinely impressed with AIDR – it feels like a mix of firewall, antivirus, DLP, SOAR, and access control, all designed specifically for AI.

There were many more fascinating things shown during the event, but if I tried to cover them all, this post would turn into a small book. A few interesting examples:
- Privilege Access Management – a very interesting idea to bring PAM directly into XDR, and even more interesting how it actually works.
- Agentic SOC/SOAR – imagine building your own team of SOC or SOAR analysts powered by AI agents that you can customize for your needs.
- SaaS Security – designed to protect and monitor data across different platforms (Microsoft 365, AWS, GWS, Salesforce, and more). It even checks their security posture and compliance levels, then recommends remediation steps.
To me, it really looks like CrowdStrike wants to cover most modern cybersecurity challenges under one unified ecosystem.
And lastly – but not least – I’d like to say thank you:
Vlada, for inviting me to this event;
Alberto, for the chicken croquette;
Antonio, for a very interesting conversation;
Arne, for the company;
Dino, for sharing your story;
and Marco, for killing me in very special way:
I truly enjoyed spending time with all of you!
