October 2008 Archives


Back in July, I blogged that I had passed my CISM exam. Today I was pleasantly surprised that all the paperwork had cleared and that I am now officially certified.

Dear Dr. Kees Leune, CISM, CISSP

Congratulations! We are pleased to inform you that on 31 October 2008 the CISM Certification Board approved your application and awarded you the Certified Information Security Manager (CISM) designation.

What's next? We'll see. It's probably time for something more technical. Maybe a SANS class, or maybe something more off-beat, such as the training programs offered by Offensive Security. For the time being, I think I'll just ride the flow a bit and see what comes my way.

I regularly get questions from students who expect to graduate soon, asking what they need to do to get started in the information security field. Unfortunately, I cannot give a straight, unambiguous answer to that. What I can do is start a thought process for that student. In the end, they will have to do the work.

I've been trying to get some very simple buffer overflow proof-of-concept code to run for quite a while. While I always thought I had a good understanding of what a buffer overflow is, and how it can lead to Badness, I had never actually been able to create a deliberately vulnerable application and exploit it.

The problem that I kept running into was that I was unable to point the instruction pointer at a location in my NOP-sled that would still be correct the next time the code was invoked.

Well, as it turns out, the problem wasn't with my code, or with my exploit, but in the Linux kernel that I used. After spending an insane amount of time trying to Google up why it didn't work, I finally found that I was missing one simple step:

# sysctl -w kernel.randomize_va_space=0

As it turns out, Linux 2.6 kernels randomize the address space of new processes when this option is set to '1', which is exactly the problem I was having. Enabling that randomization by default makes a great deal of sense, since there is really no reason not to do it: it makes buffer overflows much harder to pull off.

Another option that is useful to know about when you want to demonstrate buffer overflows is gcc's -fno-stack-protector option. Without that option, gcc adds a guard variable to functions with vulnerable objects, which introduces extra code to detect buffer overflows such as stack-smashing attacks.
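To make this concrete, below is a minimal sketch of the kind of deliberately vulnerable demo program I am talking about; the file name, buffer size, and exact commands are illustrative assumptions, not the code I actually used.

  /*
   * vuln.c - deliberately vulnerable demo program (sketch).
   * Build without the stack protector so gcc does not insert a guard variable:
   *   gcc -fno-stack-protector -o vuln vuln.c
   * and disable address space randomization before testing the exploit:
   *   sysctl -w kernel.randomize_va_space=0
   */
  #include <stdio.h>
  #include <string.h>

  void copy_input(const char *input)
  {
      char buffer[64];           /* small, fixed-size stack buffer */
      strcpy(buffer, input);     /* no bounds check: long input overwrites the stack */
      printf("Copied: %s\n", buffer);
  }

  int main(int argc, char *argv[])
  {
      if (argc > 1)
          copy_input(argv[1]);
      return 0;
  }

With randomization off and the guard variable gone, the overwritten return address ends up at the same location on every run, which is what allows a static jump into the NOP-sled to keep working.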

These two bits of knowledge finally helped me connect the dots and allowed me to finish up my demo code.

While I was listening to an Educause webcast on Red Flag compliance, the FTC announced that it would not be enforcing compliance with the Red Flag legislation until May 1, 2009. That is a major relief and takes a lot of pressure off the remainder of this month. In the meantime, check the FTC site for the formal announcement.

Taking up research again?

Since completing my PhD research, I have been mostly out of touch with what is happening on the academic side of life. Consulting and "doing things" have been very enjoyable, and I do not regret for a second that I stopped being a researcher.

However, since I am teaching at a college again, I have also started to feel the itch to do research, to (co)author papers and, hopefully, to publish them.

Once you are disconnected from academia, it is very hard to get back in, and I expect to spend several months reading up and figuring out where the scientific tide has taken the community.

Yet, before I set sail and make a serious effort at getting back into research, I need to decide which topics are currently worthwhile to investigate, and which of them appeal to me.

So, having said this, please let me know what you think! Comment on this post, contact me, or send me an email message. I look forward to hearing from you!

Dear Security Professional,

SANS is bringing Security 504: SANS Hacker Techniques, Exploits and
Incident Handling to your local community in our popular Mentor hands-on
format! Beginning on January 7, SANS Mentor Kees Leune will be leading
this class in Garden City, New York. For complete course details,
please click on http://www.sans.org/info/34049.

Before registering, please contact me for some referral information!

This week's topic in the computer security class that I teach was reconnaissance. The amount of information that is "out there", available to an attacker who wants to build a profile of his target, is overwhelming. The things that we discussed today weren't very advanced or outlandish, but they were generally new to my students (undergrads). Here are some take-homes:

  1. Don't underestimate the amount of intel that can be found on social networking sites, such as LinkedIn, Facebook, Myspace, Twitter. It will be almost impossible to control what gets posted, so make sure that you know what information is there. Search for your organization and for your key employees and see what information is posted. Be aware of what others can find out about you as a target and act accordingly.
  2. Be creative with search engines; check Johnny Long's Google Hacking Database. While you are there, order a copy of his book and support charity. Play around with the Goolag scanner to figure out what you can find.
  3. Maltego is awesome; use it, play with it, and learn from using it.
  4. Don't list anything in whois records that you do not have to. Do not list names, email addresses, titles, street addresses, etc. if you do not absolutely have to. Instead of a real name, list a job function. Instead of an individual's email address, list a functional email address. If you do list an individual's email address, make sure that the first part of the email address isn't also the user's login. List a P.O. Box, rather than a physical address. Real names and email addresses can be used for social engineering, and physical addresses can be used for site visits (for example, to search for WiFi bleeding).
  5. Use split DNS and do not allow zone transfers.
  6. Most of all, abide by the adage: don't post online what you don't want to be found.

While reading RSnake's latest post, I cannot escape the feeling that he's in a very gloomy mood today. His advice:

"The truth is, if you have something interactive connected to the Internet, it's probably exploitable in some way, and really, it's not that terrible of a thought considering it's pretty much always been that way."

As gloomy as that may sound, it is something that I run into regularly.

Too many people assume that the next new (web) app that is getting deployed 1) is absolutely essential for the continuity of the company and 2) must run on an internet-facing web server.

Air-gapping a system is probably not that feasible in this day and age (although I still see self-contained networks with only a dial-out modem that gets unplugged when not in use), but using common sense when deciding on the visibility of a system can never hurt!

The psychology of access control

Most businesses that are serious about identity management and logical access control have adopted Role-Based Access Control (RBAC) as a model to govern who has access to what.

In its most basic form, RBAC is extremely simple: an individual should be assigned permissions not based on who he is, but based on which role he plays. The role-based access control model has been extensively researched (including by me), and the mechanics of the approach are fairly well understood.
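As a minimal illustration of those mechanics, the sketch below attaches permission sets to roles and answers access questions purely in terms of the role a person plays; the role and permission names are made up for the example.

  /* rbac_demo.c - minimal role-based access control sketch (illustrative only) */
  #include <stdio.h>

  enum role       { ROLE_CLERK, ROLE_MANAGER, ROLE_AUDITOR, ROLE_COUNT };
  enum permission { PERM_READ = 1, PERM_WRITE = 2, PERM_APPROVE = 4 };

  /* Permissions are attached to roles, never directly to individuals. */
  static const int role_permissions[ROLE_COUNT] = {
      [ROLE_CLERK]   = PERM_READ | PERM_WRITE,
      [ROLE_MANAGER] = PERM_READ | PERM_APPROVE,
      [ROLE_AUDITOR] = PERM_READ,
  };

  /* An access decision only looks at the role the person currently plays. */
  static int has_permission(enum role r, enum permission p)
  {
      return (role_permissions[r] & p) != 0;
  }

  int main(void)
  {
      printf("clerk may approve:   %d\n", has_permission(ROLE_CLERK, PERM_APPROVE));
      printf("manager may approve: %d\n", has_permission(ROLE_MANAGER, PERM_APPROVE));
      return 0;
  }

Moving a person to a different role changes what they can do without touching any individual permission, which is a large part of what makes the model attractive administratively.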

However, paying attention to how a technology is used is just as important as having that technology available in the first place. In other words, the psychological factors surrounding the adoption and use of an access control model deserve as much attention as the model itself. I wish I had realized this when I was doing my PhD research.

Into the Breach

Unfortunately, I have not had much time to read lately. The only time I really get to see a book is just before bed, and then I usually don't read more than a few pages. Because of this, I was a little skeptical about taking on two new titles: The New School of Information Security and Into the Breach. The latter is at the top of my current reading stack for a number of reasons. First of all, Michael handed it to me personally at Defcon. Secondly, it has far fewer pages, so the chances that I will actually finish the book are somewhat greater.

Having said that, I just finished part 2 of the book and my opinion of the book is already a very positive one. Santarcangelo captures the true essence of modern information security: information exists to serve users, and users just want to get the job done. Most people are truly willing to do the right thing, but they need to be enabled and empowered to do so.

When a person is confronted with having to choose between finishing the job quickly enough for senior management to proceed, and full, unquestioning compliance with information security controls that might prevent him from getting the job done, it is clear what that choice will be.

Just realizing that is paramount.

Information security must never get in the way of doing business.

And yes, that implies that an information security officer must actually know what the business is all about and how it is conducted.

Essential truth: Never say No.

Reliable Security

"We are at an interesting juncture today; there are no threats to information technology for which we do not have the tools to combat them" Reliable Security, Steven J Ross. Information Systems Control Journal, Volume 5, 2008, pp 9-10.

Whether or not the author truly believes that this statement is true, it is a definite attention-getter. Another phrase in the article reads: "[...] it appears that we know what to do to achieve information security, but we are not doing it".

My first thought after reading these statements was that the author has no idea what he is talking about, or that he is trying to start a flame war. However, given that the author holds a director position at Deloitte, his thoughts may deserve more consideration.

After giving it some thought, I realized what the flaw in this logic is.

I actually agree with the point-of-view that most (if not all) technology-borne threats can be mitigated or removed.

However, what must never be forgotten is that preventive controls come at a price. And while it may be true that all technology-based vulnerabilities can be mitigated (for example, by ceasing to use the technology altogether), the cost of doing so may simply outweigh the risk it removes.

FIRST Liaison

The Forum of Incident Response and Security Teams (FIRST) is the premier organization and recognized global leader in incident response.

FIRST brings together a variety of computer security incident response teams from government, commercial, and educational organizations. FIRST aims to foster cooperation and coordination in incident prevention, to stimulate rapid reaction to incidents, and to promote information sharing among members and the community at large.

I recently joined the FIRST community as a liaison member, and I am looking forward to contributing to the community, as well as benefiting from the large professional network and body of knowledge that it provides.

New York Information Security Community

Today was the first meeting of the New York Information Security Community (nyinfosec). nyinfosec provides an information security focus group for Long Island-based practitioners, but attendees from New York City are also welcome.

While the number of attendees was a little disappointing, the content of the discussion was great. We covered a wide range of topics, from netflow analysis tools to DMCA infringement notices, with a little sidestep into policy development and incident response.

All participants agreed that the initiative adds value, and that the meetings should continue. A new date has not yet been set, but we're shooting for a cycle that has meetings every 2-3 months.

If you are a security practitioner on Long Island or in New York City, please let me know and I'll be glad to provide you with more information.
