“Is risk management too complicated and subtle for InfoSec?” — I think it’s just that mathematics is too complicated and subtle for some people

It’s interesting to see how awareness of Bayesian methods can exist in a field while ignorance of the details leads to strange conclusions about their use. A good example of this phenomenon is this mangling of the two-envelope problem — supposedly a “paradox” that Bayesian decision analysis fails at — which is then used to argue that Bayesian analysis of risks is therefore useless and that, instead,

In the absence of reliable risk information, a similar approach to information security may be the best that we can do – just try different things and see which works the best. You might call this approach “experimental security.” There may be no better approach.

Yeah, just experimenting without any inferential tools makes sense… Funny how it lets the analyst believe whatever he wants, with nothing to back it up.
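
For the record, the “paradox” dissolves as soon as you put a proper prior on the amounts in the envelopes. Below is a minimal simulation sketch, with the smaller amount drawn from an Exponential(1) prior chosen purely for illustration (my assumption, nothing from the posts above). Under that prior, Bayesian decision theory says to switch exactly when the observed amount y satisfies p(y) > p(y/2)/2, which here works out to y < 2 ln 2, and the resulting rule strictly beats both “always switch” and “never switch”.

    # Minimal sketch of the two-envelope problem with a proper prior.
    # Assumption for illustration: the smaller amount ~ Exponential(1).
    # Bayesian rule under this prior: switch iff the observed amount < 2*ln(2).
    import math
    import random

    random.seed(0)
    N = 200_000
    THRESHOLD = 2 * math.log(2)  # switch below this observed amount

    totals = {"always_switch": 0.0, "never_switch": 0.0, "bayes": 0.0}

    for _ in range(N):
        small = random.expovariate(1.0)   # smaller amount, drawn from the prior
        envelopes = [small, 2 * small]
        random.shuffle(envelopes)
        seen, other = envelopes           # we open "seen" and may switch to "other"

        totals["always_switch"] += other
        totals["never_switch"] += seen
        totals["bayes"] += other if seen < THRESHOLD else seen

    for name, total in totals.items():
        print(f"{name:13s} average payoff: {total / N:.3f}")

Both blind strategies average 1.5 (the prior mean of the two envelopes together is 3), while the Bayesian threshold rule averages roughly 1.62. No paradox, just a prior that the “paradox” never bothered to write down.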

The takedown is painstakingly laid out here, but the only comment on it at the time of writing should make it clear just how entrenched the forces of “irrational pragmatism” are:

The Bayesian approach has many beautiful mathematical properties, but it fails to make contact with reality — it has no pragmatics. Worse, it fails to recognize that there is more than one person in the world. In the Bayesian world there is only one subjective probability, “mine”. The fact that you exist and have your own subjectivity that just might have something to do with our agreed-upon response to any particular problem is totally irrelevant. All the technical mathematical results in the world can’t get past these foundational problems.

Wouldn’t it be better to admit ignorance of the issues at hand and then give your opinions on that basis rather than just spout nonsense? There is clearly much education about Bayesian analysis to be done, starting with demolishing incorrect preconceptions that are already out there.

3 Comments

  1. Thanks for commenting on my post. Yes, the field of information security seems to have a large number of people who both carry incorrect preconceptions AND doggedly persist in defending them. If you think the discussion of Bayesian methods is bad, you should see the discussions about financial justification methods (ROI, NPV, and the rest). Sheesh!

    What is disheartening is that so many people are not willing to do any additional reading or study to learn about methods outside of their previous experience or training. The resistance seems to be cultural, philosophical, and emotional.

  2. This is not related to your post, but I was just thinking about how people either use or don’t use Bayesian inference methods in the real world. Have you ever seen the TV show “Cash Cab”? The contestants who make it all the way to the end of their ride before getting three strikes win money. They have a choice to keep the money or go for double or nothing on a “video bonus” question. What’s interesting is how people (usually groups of two or more) decide whether to gamble or not.

    They almost always decide based on their general philosophy about life and their risk preferences, rather than examining the relevant prior information of how well they did on the previous quiz questions (i.e. the Bayesian approach). In other words, people who choose to gamble often rationalize that decision by saying “We came in here with nothing… it’s found money… what the heck.” Those people who choose not to gamble often say something like this: “We have a sure gain of $XXX, let’s not risk losing it.”

    There are also many people who get into the Cash Cab who are very familiar with the show, and therefore with the Video Bonus. I’ve never seen anyone say, “Well… most people who try the video bonus get it right, so we probably will get it right.” (Yes, more people get the video bonus right than wrong.) That would also be relevant prior information.

    I’m not suggesting that Cash Cab contestants should do any Bayesian calculations to make their decision. But it’s interesting to see how they frame the decision so that most of the relevant prior information is excluded from consideration.

    • Mr. Bayes said

      Thanks for your interesting comments!

      I’ve never seen this show, “Cash Cab”, but from your descriptions I think you might have to revise your views slightly. When people think, “I came in with nothing, so I’ll just gamble what I have anyway, because I can’t do worse than ending up back where I started”, that is (potentially) an application of Bayesian decision theory: the decision taken is the one that maximises expected utility, which is a function not only of a person’s subjective beliefs about the system parameters but also of their utilities for the possible outcomes (the toy calculation at the end of this reply makes this concrete).

      The definition of what is “rational” behaviour, even “Bayes rational” behaviour, is harder to pin down than you think!
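
      For concreteness, here is a toy version of that calculation (all the numbers are assumptions of mine, not anything from the show): suppose the bank is $600, your subjective probability of getting the video bonus right is 0.6, and the gamble is double or nothing.

          # Toy expected-utility comparison for a double-or-nothing decision.
          # Assumed numbers: bank = $600, subjective P(correct) = 0.6.
          import math

          bank, p = 600.0, 0.6

          def expected_utility(u):
              keep = u(bank)
              gamble = p * u(2 * bank) + (1 - p) * u(0)
              return keep, gamble

          utilities = {
              "linear": lambda x: x,           # risk-neutral
              "sqrt": lambda x: math.sqrt(x),  # risk-averse (concave)
          }

          for name, u in utilities.items():
              keep, gamble = expected_utility(u)
              best = "gamble" if gamble > keep else "keep"
              print(f"{name:6s}: EU(keep) = {keep:6.2f}, EU(gamble) = {gamble:6.2f} -> {best}")

      With linear utility the gamble wins (0.6 × 1200 = 720 > 600); with the concave utility it loses (0.6 × √1200 ≈ 20.8 < √600 ≈ 24.5). Same subjective belief, different utilities, different “rational” decision.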
