While I am not the High Priest of Unit Testing, one thing I have seen several times is tests written for compliance, uh, I mean coverage's sake that are effectively unfailable. Things like "assert that system.string does stringy things" and the like: the sort of tests that only handle the favorable condition. These are harmless, really, in the unit testing space. They make the check-in longer, sure, and eat up SLOC, but in general they're pretty benign.
And while I am decidedly not a specialist in unit testing, I can speak to QA quite a bit, because that's usually the excuse developers give when I bring up failing tests. They are depending on QA to catch the failure cases. They should be doing that in their unit tests.

In the application security world, unfailable tests are a real issue. How do unfailable tests make a difference in the security world? The issue is that there is never a view into what happens when the application fails, even at the unit level. That's kind of specific, but you can see generally what I'm talking about: how does the application respond if something isn't available, doesn't happen, or doesn't add up?
For a more serious glance into that, I strangely enough found a post by Terence Tao, a prominent mathematician, who wrote about unfailable tests in the premises you give an AI before you set a specific prompt. Kind of a built-in prompt mis-engineering of the interaction with the AI, except not at the chatbot level. These mathematician people are speaking a little closer to the metal, using the AI in the old-fashioned machine-learning sense, with a big dictionary of stuff to draw from.
What Mr. Tao took out of this was that genAI specifically lacks a 'failure mode' where it just tells the user "I'm sorry Dave, I can't do that" and gives the user the option to change the query. Think of a regular search engine: if you search for something really obscure and it isn't in the index, you get zero results. The search failed. That is an important part of the entire user interaction. The tools he uses have features that enforce a failure mode, which lets him move his debugging to a more detailed form.
When I go to apply this to application security and unit tests, I think of the hard-to-isolate problems that the lack of a failure mode is causing in generative AI. When a lawyer submits a court document that references cases that don't exist, that's a breach of sorts. The system has failed the user because the system failed to fail.
Let's look at a specific example. It's a stupid simple example, but it illustrates the point. Imagine you have a method that divides two numbers, like my excellent Divide function:
public static double Divide(double numerator, double denominator)
{
    if (denominator == 0)
        throw new DivideByZeroException("Denominator cannot be zero.");

    return numerator / denominator;
}
More often than not, we will follow the happy path with our test and make sure the division is right, ya know?
[Fact]
public void Divide_SixDividedByTwo_ReturnsThree()
{
    double result = TimeHelper.Divide(6, 2);
    Assert.Equal(3, result);
}
This is all well and good. We gave it numbers, it gave us the quotient. We made sure it met all of the expected input, right? Or did we? Aren't there two known, expected results to this method? You bet: we have defined an error condition, and that should be tested for too.
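A sketch of what that failure-path test might look like, using xUnit's Assert.Throws and assuming the Divide method above lives in the same TimeHelper class the happy-path test uses:

```csharp
[Fact]
public void Divide_DenominatorIsZero_ThrowsDivideByZeroException()
{
    // The error condition is part of the method's contract, so pin it down:
    var ex = Assert.Throws<DivideByZeroException>(
        () => { TimeHelper.Divide(6, 0); });

    Assert.Equal("Denominator cannot be zero.", ex.Message);
}
```

Now the unfavorable condition is just as locked in as the favorable one: if someone later changes Divide to silently return zero instead of throwing, a test fails.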
"Bill, is this going to get around to appsec before the heat death of the universe?"
Yes, it will, I promise.
When a system misbehaves, it is extraordinarily important to know how it will react. Users will just see errors, but attackers will see opportunity. I talk about this a lot in my information disclosure talk. The attacker causes a fault in the system, the system returns errors, they research those errors, and then they exploit the findings.

When I perform application vulnerability analysis, I stop when I see an error from a DBMS. If I get an error, I will be able to exploit it. When I see an error with a stack trace, I stop; I know that's exploitable too. It's as solid a bet as there is in the pentest biz. Making sure that each individual unit in an application fails in a known manner when something really weird happens is key. It prevents the creation of opportunity.
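One way to get that known failure mode is to decide, at the unit boundary, exactly what a caller sees when things go sideways: a boring, predetermined answer, with the juicy details logged where only you can read them. Here's a minimal sketch of that idea; the SafeMath class and its TryDivide wrapper are my own illustration, not anything from the Divide example's codebase:

```csharp
using System;

public static class SafeMath
{
    // Fails in a known manner: the caller gets a bool, never a stack trace.
    public static bool TryDivide(double numerator, double denominator, out double result)
    {
        if (denominator == 0)
        {
            // Log the specifics server-side; the caller only learns "it failed."
            Console.Error.WriteLine("Divide rejected: denominator was zero.");
            result = 0;
            return false;
        }

        result = numerator / denominator;
        return true;
    }
}
```

Callers branch on the bool and show the user a generic message, so the attacker who feeds in the weird input gets nothing to research.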
So the next time you are slotting in tests for a new function think about the lizard. And maybe buy him a beer.