Shortly after I joined PayPal, right before the historic eBay split, I overheard a conversation that stuck with me for years.
An eBay executive was speaking with a team from a newly acquired company, walking them through integration priorities. The team was eager, full of ideas about what they could build, the features they wanted to develop, the innovations they wanted to pursue.
The exec cut through all of it with something I'll never forget. He told them the best thing they could do was the hard work of connecting to PayPal's infrastructure. "It's not flashy. It's not innovative. But it's what's going to create value. Don't spend the next four years building science experiments. Build real value."
At the time, I filed it away as good general advice. It took me a few more years of leading security teams to realize it was one of the most important strategic principles I'd ever heard. Because security teams run science experiments all the time, and it's one of the biggest drains on our effectiveness as an industry.
What Makes Something a Science Experiment
A science experiment in security is any project where the primary motivation is technical curiosity rather than business value. It's the custom tool your team builds because the problem is fascinating, not because the existing solution is inadequate. It's the proof-of-concept that lingers on the roadmap for three straight quarters because nobody wants to admit it doesn't have a clear outcome. It's the initiative that looks great in a conference talk but doesn't move the needle on protecting the business.
These projects share a few telltale signs. They solve problems that are intellectually stimulating but not urgent. They duplicate capabilities that already exist in mature vendor products. They consume your best engineers' time, which is your scarcest resource. And they're almost always more fun than the hard, unglamorous work that actually creates value.
That last point is the trap. Security engineers are smart, curious people. They're drawn to interesting problems. That's a strength when it's channeled toward your organization's unique challenges. It becomes a liability when it's channeled toward rebuilding things the market has already solved.
Interesting Isn't the Same as Important
I get the appeal. I've felt it myself.
Early in my career as a security leader, I invested significant time and resources into building capabilities that made my team look cutting-edge. We had dashboards that impressed executives, custom integrations that demonstrated technical sophistication, projects that made for great conference presentations. What we didn't always have was a clear answer to the question: "Is this making our company meaningfully more secure than a simpler approach would?"
The honest answer, more often than I'd like to admit, was no.
Here's how it happens. You see a gap in your security posture. Instead of evaluating whether an existing product solves it well enough, you start thinking about how you could build something better. Your engineers get excited because building is more interesting than configuring. Six months later, the project has taken on a life of its own. Nobody wants to kill it because of the sunk cost and because it's become someone's identity on the team. Meanwhile, a vendor could have had you covered in weeks, with the added benefit of threat intelligence from thousands of other customers.
The uncomfortable truth that nobody puts in their LinkedIn bio: the work that creates the most security value is usually not the work that wins awards or gets you invited to speak at conferences. It's making sure your vulnerability management program actually covers everything and that patches get applied on time. It's ensuring your identity and access management is tight and that offboarding actually works. It's running tabletop exercises that are realistic, not theatrical. It's building relationships with business leaders so you understand what they actually need protected and why.
This work is repetitive. It's unglamorous. It requires discipline rather than brilliance. And it's where the vast majority of your security value comes from.
I've watched teams with half the budget and a fraction of the engineering talent outperform teams with massive resources, simply because they were relentless about the fundamentals. They didn't have impressive demos. They had airtight processes. They didn't present at conferences about their custom tooling. They presented to their board about measurable risk reduction. That's a very different kind of success, and it's the kind that actually keeps your company safe.
Where This Fits in the Bigger Picture
I've written about the Core, Context, Commodity framework in Part 1: the idea that security teams need to focus their innovation energy on their Internal Core, the problems that are unique to their specific company. Science experiments violate this principle because they burn innovation calories on problems that aren't unique to you at all. They just happen to be interesting.
If you've accepted that security is a Context function, and that your Internal Core is protecting your specific company, then these pet projects become easy to identify: they're any project where your team is innovating on a commodity problem. Technically challenging? Sure. Strategically valuable? Rarely.
How to Spot Them on Your Roadmap
I ask my teams to pressure-test every project on their backlog with a few simple questions:
What specific business outcome does this produce? Not "improved security posture" or "better visibility." Those are too vague. What will be different, measurably, when this is done? If the answer takes more than two sentences, that's a warning sign.
Does this solve a problem unique to us? If the answer is no, if this is a problem that hundreds of other companies also have, then the market almost certainly has a solution. Your job is to evaluate and implement that solution, not to build your own.
What are we not doing while we do this? Every project has an opportunity cost. The engineer building a custom tool is an engineer not working on the hard, company-specific problems that only your team can solve. Make that tradeoff explicit.
Eighteen months from now, will we be better off having bought or having built? Custom tools require maintenance, documentation, and institutional knowledge. When the engineer who built it leaves, you inherit a liability. Vendor tools come with support, updates, and a roadmap. Think about the long game, not just the build phase.
If a project can't survive these questions, it's probably a distraction dressed up as innovation. That doesn't mean it's worthless. It means it shouldn't be on your roadmap competing for resources with work that directly protects the business.
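If it helps to make the filter concrete, the four questions can be sketched as a blunt triage check. This is purely illustrative: the names (`RoadmapItem`, `triage`) and the pass/fail rules are my own shorthand for the questions above, not a real tool or process.

```python
from dataclasses import dataclass

@dataclass
class RoadmapItem:
    """A hypothetical record of one backlog project and its answers
    to the four pressure-test questions."""
    name: str
    business_outcome: str               # concrete, measurable result; empty if vague
    unique_to_us: bool                  # no mature vendor product covers this
    opportunity_cost: str               # what we won't do while we do this
    cheaper_to_buy_in_18_months: bool   # honest long-game estimate

def triage(item: RoadmapItem) -> str:
    """Flag any project that fails one of the four questions."""
    if not item.business_outcome:
        return "cut: no measurable business outcome"
    if not item.unique_to_us:
        return "cut: commodity problem; evaluate vendors instead"
    if item.cheaper_to_buy_in_18_months:
        return "cut: long-term maintenance outweighs the build"
    return "keep: unique problem with a concrete outcome"

# A made-up example: a custom scanner with no stated outcome
print(triage(RoadmapItem(
    name="custom secrets scanner",
    business_outcome="",
    unique_to_us=False,
    opportunity_cost="IAM offboarding automation",
    cheaper_to_buy_in_18_months=True,
)))  # → cut: no measurable business outcome
```

In practice these answers are judgment calls made in a roadmap review, not booleans, but encoding them this bluntly makes the tradeoffs hard to dodge.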
Fast on the Right Things
I want to be clear: this isn't an argument against speed. It's the opposite. Vanity projects slow you down by consuming your best people on work that doesn't produce outcomes. The fastest security teams I've seen are the ones with the most discipline about what they say yes to. They move quickly and decisively on the hard, real work because they're not distracted by the shiny stuff.
Execution in security isn't about doing more. It's about doing the right things and finishing them. Every engineering vanity project on your roadmap is a real project that's not getting done.
The Discipline to Say No
Killing one of these projects is among the hardest things a security leader has to do. You're telling a talented engineer that their work, which is often technically excellent, isn't the right use of their talent. That conversation requires care.
The framing matters. This isn't about the quality of the work. It's about where the work is directed. The same engineer who built a beautiful custom tool could be applying that same creativity to a problem that only exists in your environment, something no vendor can solve for you. That's where you want their brilliance.
I've found it helps to be explicit about the distinction with your team. We talk openly about the difference between engineering curiosity and business priorities. It's not a judgment on the person or the idea. It's a strategic filter. Some of the best ideas I've seen started as exploratory projects and, after honest evaluation, turned out to have a real business case. The filter isn't meant to kill creativity. It's meant to channel it.
None of this means you should ban experimentation entirely. Curiosity is one of the most valuable traits a security engineer can have, and you don't want to crush it. The key is to create space for exploration that's bounded by outcomes. Give your team time to prototype and explore, but attach every experiment to a hypothesis about business impact. "We think this approach could reduce our mean time to detect by 40%" is an experiment worth running. "This would be cool to build" is not. When experimentation is tied to measurable outcomes, it stops being a science project and starts being R&D. That's a distinction worth preserving.
Where Your Engineers Should Be Spending Their Time
Your best people should be working on problems that require deep knowledge of your specific environment: your technology stack, your data flows, your business logic, your customer promises. That's the work that no vendor, no matter how good, can do for you. Contextualizing threat intelligence for your specific architecture. Building detection logic tuned to your application behavior. Designing security controls that fit your engineering culture. Integrating security into your product in ways that feel like a feature rather than a tax.
That work is hard. It requires creativity, deep thinking, and sustained effort. And it's where security teams earn their keep. Everything else is a candidate for commoditization, and every vanity project you kill is an engineer freed up to work on something that actually matters.
The discipline to focus on value over novelty isn't natural. It has to be built, reinforced, and modeled by leadership. But once your team internalizes it, the results speak for themselves: less noise, more impact, and a security program that the business actually trusts.