Dr. Tippett is on tour to let the world know about the Data Breach Investigations Report that his team put together and published earlier this year. The presentation was entertaining at the very least, and there were even some genuinely interesting bits here and there.
Dr. Tippett is a scientist.
Assume that someone says: "We need to patch once per day." In Tippett's view, that is a hypothesis, and a hypothesis needs to be tested to determine its validity. These tests can be performed either by analyzing data or by conducting a controlled experiment.
In many cases, Tippett claims, testing a hypothesis ("we need more of product X") will show that the marginal benefit of deploying more (of the same) technology does not outweigh the marginal cost. For example, patching once a day instead of once a month might be far more expensive than the costs it averts. If that hypothesis is proven true, patching once per day instead of once per month would be a colossal waste of resources: the benefits simply would not outweigh the costs.
In an ideal risk-assessment scenario, sufficient data is available to estimate such a risk (defined as likelihood ∙ impact) before a decision must be made, rather than in hindsight, after a solution has already been implemented.
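This kind of hypothesis test boils down to simple arithmetic: estimate the risk (likelihood ∙ impact) under each option and compare the marginal benefit against the marginal cost. A minimal sketch, with entirely invented numbers for the patching example (the likelihoods, impact, and staffing costs below are illustrative assumptions, not figures from the report):

```python
# Hypothetical hypothesis test for "we should patch daily instead of monthly".
# Every number below is an invented assumption, used only to show the arithmetic.

def annual_risk(likelihood_per_year: float, impact_dollars: float) -> float:
    """Risk, defined as likelihood * impact (i.e., expected annual loss)."""
    return likelihood_per_year * impact_dollars

IMPACT = 500_000  # assumed cost of one breach, in dollars

# Assumed annual breach likelihoods under each patching cadence.
risk_monthly = annual_risk(likelihood_per_year=0.10, impact_dollars=IMPACT)
risk_daily = annual_risk(likelihood_per_year=0.08, impact_dollars=IMPACT)

# Marginal benefit: risk averted by moving from monthly to daily patching.
marginal_benefit = risk_monthly - risk_daily

# Marginal cost: ~250 extra patch days per year at an assumed $300/day of staff time.
marginal_cost = 250 * 300

print(f"marginal benefit: ${marginal_benefit:,.0f}")
print(f"marginal cost:    ${marginal_cost:,.0f}")
print("worth it" if marginal_benefit > marginal_cost else "waste of resources")
```

Under these (made-up) assumptions, daily patching averts $10,000 of expected loss per year but costs $75,000 — exactly the kind of result Tippett says such tests often produce.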
In practice, however, most organizations lack the body of experience needed to compute these risks at all, or at least in a way that is statistically significant enough to be usable. Most organizations are also unwilling (or unable) to design and execute an experiment and draw conclusions based on its outcome.
These two observations are the death-blow for a formal risk management approach to information security.
Until sufficient reliable data becomes available (at reasonable cost), organizations will not be able to build their information security programs on a formal risk management approach.
If such data does become available (and it is starting to), the IT security landscape will change. Until then, risk management will remain predominantly something we talk about, rather than practice.