Sunday, September 30, 2007

Security Is Business

Infosecurity Moscow is over. I have to say that if you did not attend the conference, you missed nothing. The most interesting thing I found there was friends to talk to in person.

Everybody is trying to stick a label with the precious word "security" on their products. You could find a traffic analyzer rebuilt for security needs; the most secure document workflow system; a boy trying to sell a sniffer that can recover passwords from a broadcast segment; and companies known to nobody, with the magic word in their advertisements.

Well-known security firms are trying again and again to sell us their products - a new, fully rewritten version of an antivirus, a new IDS with a new glossy interface, a new security scanner (after re-branding).
So many conference sessions were given by people who are just managers and salespeople. They cannot answer a single question that was not covered in the presentation.

It seems like nobody can make a good identity management system, because it is complex. Nobody can make a real log management system with a good analysis engine, because it is hard. Nobody can give you advice, because it is their know-how.


I am attending Infosecurity Moscow again next year, but only for networking with friends.

Thursday, September 27, 2007

Can You Prove An Axiom?

Several days ago I spent a lot of time and effort trying to convince my system administrators that it is not possible to change a computer account's password in MS Active Directory 2003 using nmap. Their main argument was that a Microsoft Premium Support consultant said it was possible!

Well, I posted this as a joke, but I would never have thought it could concern me directly. Of course, there were smart guys who understood the words "nmap is a port scanning tool that doesn't try to guess passwords", so thanks a lot to them, but it is a real pity that I had to spend time and nerves proving this axiom not only to highly skilled administrators, but also to a Microsoft Premium Support consultant. It is a very sad truth, and unfortunately I see no future with IT like this.
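For the record, here is what an nmap run against a domain controller amounts to: a port scan that lists listening services and never submits credentials. This is only an illustrative sketch; the host name and port list below are made-up examples.

# Illustrative only: a port scan reports which services are listening on typical
# Active Directory ports; it does not authenticate and cannot change any password.
# "dc1.example.local" is a made-up host name.
import subprocess

result = subprocess.run(
    ["nmap", "-p", "88,135,389,445,464", "dc1.example.local"],
    capture_output=True, text=True, check=False,
)
print(result.stdout)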

Tuesday, September 25, 2007

Does Spam Still Work?

I got a spam message recently. There was nothing unusual about it, just a plain stock scam:


Then I wondered whether such schemes really work and checked the price on Tuesday - it was up from $0.10 (the price correctly mentioned in the message) to $0.155, which is a 55% increase. The image is not very good, but the trend is clear (chart from Yahoo! Finance):


Anyone want to guess what will happen next? It is also notable that there was an increase in the number of transactions in these shares a couple of days before the spam message.
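Just to double-check the arithmetic with the figures from the post:

# Percentage change from the price quoted in the spam message to Tuesday's price.
old_price, new_price = 0.10, 0.155
print(f"{(new_price - old_price) / old_price:.0%}")  # prints 55%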

Take care.

Friday, September 21, 2007

When Cyber Security Meets Physical Security

Cyber security and physical security are highly interdependent. For example, most of the time, having physical access to a server means having full access to it. But more concerns keep emerging - some of them subtle and dangerous. See the excellent point from the SANS ISC Diary: Pen Testing - Dangerous side effects?

Monday, September 17, 2007

Will We Ever Learn?

http://www.theregister.co.uk/2007/09/17/vista_hit_by_stoned_angelina/

"A batch of laptops pre-installed with Windows Vista Home Premium was found to have been infected with a 13-year-old boot sector virus."
(Also in Russian). Are we stuck in a loop, doomed to rediscover "good old" vulnerabilities (remember the beta versions of Vista?) and viruses over and over again? I guess the number of signatures in modern antivirus software has become so huge that antivirus vendors are dropping old viruses to reduce database size and improve speed.

Project Management Fun

A classic picture, found here:

PIN-Protected Flash Drive

I got my Corsair Flash Padlock yesterday and had some time to play with the beast a little. I didn't expect any silver-bullet type of thing, and it provided exactly the features I expected.
Remember Schneier's five-step risk analysis process (presented in Beyond Fear, and also described here, for example)? The second step is "What are the risks to the assets?". To determine whether the device fits your purposes, it is essential to understand whom you are protecting your data against. You cannot expect any device without high-grade encryption (a proven algorithm, a strong password, etc. - most of us know encryption is hard to implement properly) to provide any protection against a determined attacker. Even with encryption, there are a myriad of ways to attack the system - from shoulder surfing to keyboard sniffers and "thermorectal cryptanalysis" (in Russian). But what about the ordinary, non-corporate user? I'm sure that in most cases people worry about accidental disclosure of their private information rather than about a determined attacker (a neighbor?) who would disassemble the device and access the data directly.

On the positive side, authentication is performed in hardware, and the device is platform independent and does not require installation of any software on the computer - which also means no administrative rights are required for operation.

Conclusion
An affordable price, ease of use and platform independence make the Corsair Flash Padlock a nice option, as long as you understand its limitations. Basically, data is not encrypted on the flash memory, and adequate protection is provided only against accidental loss and an unskilled attacker. It is certainly a step in the right direction.

Additional Links
http://www.corsairmemory.com/products/padlock.aspx
http://www.corsair.com/_appnotes/AN701_Padlock_USB_Flash_Drive_08212007.pdf
http://www.schneier.com/blog/archives/2007/08/padlocked_flash.html
http://www.clevx.com/datalock.html

Friday, September 14, 2007

Proverbs and Real Life: Who Learns from Whose Mistakes

The smart learn from their own mistakes,
The stupid do not learn at all.

Sad thoughts: the missing-tile effect.

The essence of the problem is that, unfortunately, human attention has an amazing ability to notice the negative, while it registers the positive only with great difficulty.

Imagine a room perfectly laid with ceramic tile: the color palette is superbly chosen, the joints are perfectly finished. But then you notice that one tile is missing, and in its place gapes a dark hole through which you can see the wall with remnants of old paint, plaster and other building materials testifying to the long and eventful history of the room. Admit it: after that, the thoughts of how well and beautifully everything was done vanish somewhere...

Moreover, under the influence of such a negligible defect people, as a rule, not only stop admiring the good result but, on the contrary, start finding ever smaller flaws, whose combined weight radically changes their opinion of the work done. To our common misfortune, the moral is that a tiny spoonful of tar not only can, but almost certainly will, spoil the whole barrel of honey.

This works 100% of the time in projects as well. A minimal slip can wipe out colossal effort and completely turn recent allies around; like a sorbent, they will collect and accumulate all the dirt that can be found. The negativity will grow like a snowball, threatening to kill the project.

Undoubtedly, this is yet another human bias, but it is precisely what influences decision-making, while the rational approach, unfortunately, arrives too late.

The influence of human biases on decision-making is excellently described in Bruce Schneier's essay The Psychology of Security.

Fun

http://thesource.ofallevil.com

Tuesday, September 11, 2007

MTBH

There are a lot of abbreviations in use in IT and IT security - it's neither bad nor good, it's just a fact. I was thinking of introducing a new one - MTBH.

One can encounter a lot of "holy wars" between lovers of this or that operating system, and for the unprepared user it is always hard to form an opinion, because there is a whole bunch of factors being presented and discussed. Here I propose a (yet) hypothetical factor that can help choose a desktop operating system - Mean Time Between Helpdesk Calls (MTBH).

Besides usability (which is hard to measure, by the way), it is important to the end user that his/her personal computer does not require much maintenance and does not cause much trouble. Do you remember the last time you did a "virus cleaning exercise" for some friend or relative? As for corporations - they are also interested in minimizing support/operational costs and would prefer software (you can include office or other software in such an evaluation as well) with a big MTBH. The result would be an integral factor that depends on ease of use, reliability, stability and out-of-the-box resistance to various computer threats.

That being said, I am not claiming that the desktop operating system market share will change dramatically anytime soon, but to be sure that you are using the optimal solution you need unbiased criteria during the selection process, and MTBH (once we manage to measure it) may become one of them.
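If MTBH were ever measured, the calculation itself would be trivial, by analogy with MTBF: observed operating time divided by the number of helpdesk calls generated in that time. A minimal sketch with made-up ticket data (all values below are illustrative assumptions):

# Hypothetical sketch: MTBH computed by analogy with MTBF, i.e. operating time
# divided by the number of helpdesk calls in the observation window.
from datetime import datetime

# Made-up helpdesk tickets for a single desktop.
tickets = [
    datetime(2007, 3, 2),
    datetime(2007, 5, 17),
    datetime(2007, 8, 30),
]
observation_start = datetime(2007, 1, 1)
observation_end = datetime(2007, 9, 1)

def mtbh_days(start: datetime, end: datetime, calls: list[datetime]) -> float:
    """Mean Time Between Helpdesk calls, in days, over the observation window."""
    operating_days = (end - start).days
    return operating_days / len(calls) if calls else float("inf")

print(f"MTBH: {mtbh_days(observation_start, observation_end, tickets):.1f} days")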

Analysis of Caching Server Logs

I have done a small analysis of the caching server (proxy) logs.
The results are encouraging: roughly 7% of the servers that users access are candidates for blocking. In light of recent news, advertising networks now fall among the external sites that pose a threat to users' computers. The methods used in the analysis make it possible, besides identifying webmail sites, forums, etc., to detect banner networks as well.
Report on the work done.
Appendix to the report.
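As a rough illustration of the general approach (not the actual method from the report), here is a minimal sketch that counts requests per destination host in a Squid-style access.log and flags hosts whose names match a few assumed ad-network patterns; the log path and the patterns are assumptions for the example.

# Rough sketch, not the method from the report: count requests per destination
# host in a Squid-style access.log and flag likely banner/ad networks by name.
# The log path and the AD_HINTS patterns are assumptions for the example.
from collections import Counter
from urllib.parse import urlsplit

AD_HINTS = ("ad.", "ads.", "banner", "counter", "doubleclick")

hosts = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        fields = line.split()
        if len(fields) < 7:
            continue
        url = fields[6]  # request URL column in the default Squid log format
        host = urlsplit(url).hostname or url.split(":")[0]  # CONNECT lines are host:port
        hosts[host.lower()] += 1

for host, count in hosts.most_common(20):
    flag = "ad?" if any(hint in host for hint in AD_HINTS) else ""
    print(f"{count:8d}  {host}  {flag}")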

Monday, September 10, 2007

Browser security

Recently I mentioned here (unfortunately available only in Russian) that the number of discovered vulnerabilities does not indicate the level of security. It is a rather strange assertion that Windows is more secure than Linux because it had fewer faults discovered during a specified period of time.
Here are facts showing that the situation is actually the reverse.

Figure 6 shows charts of remote code execution vulnerabilities in IE, Firefox and Opera. By Microsoft's logic, IE should be safer... But unfortunately things are not so simple - see figure 7. If the idea is not obvious, read the text between these figures:

... As shown in Figure 7, these input URLs that resulted in a 0.5735% of successful compromises of Internet Explorer 6 SP2 did not cause a single successful attack on Firefox 1.5.0 or Opera 8.0.0...

Awareness training: misleading applications.

There are materials (Misleading Applications: faking left, running right; Misleading Applications – What you need to know; KYE: Malicious Web Servers; and others) about client security. IT and IT security can fight such threats at the infrastructure level (Web filtering - URL/content/category, anti-virus/-malware/-spyware/-crimeware/etc.), but unfortunately that is not enough, because new attack techniques tend to target people - the weakest link in the chain of security countermeasures - using social engineering. The new kind of deceiving software, misleading applications, is no exception.

In this short post I outline some very simple rules that can help ordinary people protect themselves and significantly lower the risk of being attacked through Internet client software:

  • Control your patch level and the patch level of your antivirus.
  • Do not visit unknown sites.
  • Do not trust unknown sites. If a site tries to persuade you to install something "for your own good", consult your IT/IT security. Do not install software from the Internet.
  • Do not open e-mails you don't expect or that come from somebody you don't know. Do not open attachments or click links in such e-mails.
  • Switch off unneeded functionality in the client. For example, if you don't need JavaScript, disable it in your browser.
  • Do not run Internet clients (browser, e-mail client, IM client, etc.) with admin privileges.
  • Be paranoid. If you feel suspicious, do not hesitate to contact your IT/IT security.

Friday, September 7, 2007

Does Quantitative Risk Analysis Suck?

To clarify the issue right off, I am considering information security risk analysis here. Although some aspects may apply to other areas of risk analysis and management, covering them is not the purpose of this article.

What are the practical benefits of risk analysis?

Most of us think that risk analysis is useful, but what do we mean by that? What are the practical benefits of spending your time on risk analysis? I would outline these two major benefits:

  • Risk analysis results allow you to justify the spending on the information security programme.
  • Risk analysis results help prioritize the information security efforts.
To me, assessing whether a risk analysis method is adequate means assessing whether it can accomplish these goals. Please also note that a good risk analysis method should not only serve these goals, but also yield correct and reproducible results, which leads to the next section:

Can we trust the results?
Quantitative risk analysis results depend on the method used and on the correctness of the source data. Most risk analysis methodologies are based on the formula:
ALE = SLE * ARO
(ALE - Annualized Loss Expectancy, SLE - Single Loss Expectancy, ARO - Annualized Rate of Occurrence). Therefore, to get correct (and reproducible) results we need to:
  • Correctly identify the list of threats to the system (since the overall risk is the sum of the individual risks). This step is required for both quantitative and qualitative risk analysis.
  • Correctly assess the probability (ARO) and impact (SLE) of each threat.
It does not help to derive SLE from further formulas, such as:
SLE = AV * EF
(AV - Asset Value, EF - Exposure Factor). Now you just have to deal with another value that is hard to quantify - the Exposure Factor.
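To make the dependence on these hard-to-quantify inputs concrete, here is a small worked example with purely hypothetical numbers (none of the values are real statistics): a disagreement between two expert opinions on EF and ARO translates directly into an order-of-magnitude spread in the resulting ALE.

# Worked example with purely hypothetical numbers: ALE = SLE * ARO, SLE = AV * EF.
# None of the inputs below are real statistics.

def ale(asset_value: float, exposure_factor: float, aro: float) -> float:
    """Annualized Loss Expectancy from Asset Value, Exposure Factor and ARO."""
    sle = asset_value * exposure_factor  # Single Loss Expectancy
    return sle * aro

asset_value = 100_000.0  # hypothetical asset value

# The same threat under an "optimistic" and a "pessimistic" expert opinion:
print(f"{ale(asset_value, exposure_factor=0.1, aro=0.5):,.0f}")  # 5,000
print(f"{ale(asset_value, exposure_factor=0.4, aro=2.0):,.0f}")  # 80,000
# A 4x disagreement on EF and on ARO yields a 16x spread in the resulting ALE,
# so the number is only as trustworthy as the expert guesses behind it.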

We know several industries that are built on risk assessment - insurance and financial institutions among them. Most of them assess risks based on statistical data (the insurance industry has several hundred years of statistics) and extensive theory. Do we have either in IT security? I don't know of any and can only guess that most of the time these numbers are expert opinions. Keeping this in mind, can you get results that can be acted upon?

Various methodologies
Given unreliable source data, it is obvious that it does not matter which risk assessment methodology you use (here I mean not the process, but the formulas). I think that what a methodology can basically give you is who should be involved in the process, how it should be organized, what documents to produce, how to structure and simplify the process, etc.
Given the amount of effort required to perform a comprehensive risk analysis, one should decide whether the effort is worth it before starting the process.

To summarize: is quantitative risk analysis worth a close look? Definitely. Is there a mature methodology that can be used right now and produces correct and reproducible results? I have not seen one yet.

Related
"
Get 50 practitioners in a room and you will have 50 different methodologies for assessing IT risk. The trouble is that nearly all of them will be subjective – the outcome of any risk assessment exercise is most likely to be ‘high’, medium’ or ‘low’. Even when it’s an apparently objective number -- 54,821, for example – you don’t learn all that much. Try going to your board and telling them that their IT risk is 54,821 and their eyes are likely to glaze over very quickly! Any attempt to calculate ‘annual loss expectancy’, although valiant, only results in trouble when the degree of variability is larger than the sum itself!"
IT Risk Assessment – Fact or Fiction? (Symantec)

Thursday, September 6, 2007

Regular Monitoring

Mr. Schneier, warmly loved by many, recently raised the topic of systems that exist and run without being monitored, which is what makes them ineffective.
Now let's look around. So, what is going on in our network:

  • the antivirus protection subsystem - when was the last time you reviewed virus activity in your network and took action?
  • the backup subsystem - when was the last time you reviewed its logs looking for failed jobs?
  • the intrusion detection subsystem - when was the last time you looked at its logs and tried to make sense of the recorded events?

I think many will answer: a long time ago.

What is the problem? The systems exist, but they often bring no benefit. I am sure that in 90% of cases a system stays either with its default settings or in whatever state a fly-by-night integrator left it. Yes, sometimes the administrator pokes at it a little, but far too often you meet administrators who have been studying the subsystem for 2-3 months and, when audited, do not know what particular settings mean, or how the notifications generated by the system work and are configured, and so on.

Let's set aside the worst case, when the administrator simply does not watch the system for lack of knowledge. Take a fairly typical picture instead, when something generated by the subsystem does reach the administrator. What happens then? Nothing good either.
Any untuned intrusion detection subsystem generates hundreds of notifications a day. The antivirus subsystem can flood administrators with dozens of e-mails about viruses found in quarantined files. The backup subsystem can report non-critical errors every day, and as a result everything gets filtered out, including events that actually matter. The result: the administrator simply pays no attention to what the subsystems generate.

But the problem can be solved simply by limiting the flow of information reaching the administrator. Of course, there is a chance that events which really should not be missed will be missed. Still, it is better to move toward the ideal smoothly and steadily than to know, from the moment the system is deployed, that it will never be of any use.
In other words, by making the administrator responsible not for an abstract "monitoring of subsystem XXX" but for the mandatory processing of N events, incident triage or tuning of the filtering rules, we start to control the process of maintaining the subsystem.
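A minimal sketch of that idea (the event structure and the cap N are assumptions for illustration): collapse duplicate notifications, rank the rest by severity and frequency, and hand the administrator at most N digest lines per day.

# Illustrative sketch: cap what reaches the administrator at N events per day,
# collapsing duplicates and keeping the most severe and most frequent ones first.
# The event structure and the value of N are assumptions.
from collections import Counter

N = 20  # mandatory number of events the administrator must process per day

def daily_digest(events: list[dict], n: int = N) -> list[tuple[str, int]]:
    """Return at most n (message, count) pairs, most severe and most frequent first."""
    weight = Counter()
    severity = {}
    for e in events:
        key = (e["subsystem"], e["message"])
        weight[key] += 1
        severity[key] = max(severity.get(key, 0), e["severity"])
    ranked = sorted(weight, key=lambda k: (severity[k], weight[k]), reverse=True)
    return [(f"{sub}: {msg}", weight[(sub, msg)]) for sub, msg in ranked[:n]]

# Example: three raw notifications collapse into two digest lines.
sample = [
    {"subsystem": "backup", "message": "job FULL-01 failed", "severity": 3},
    {"subsystem": "antivirus", "message": "virus quarantined", "severity": 1},
    {"subsystem": "antivirus", "message": "virus quarantined", "severity": 1},
]
for line, count in daily_digest(sample):
    print(f"{count}x {line}")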

The bottom line:
  • administrators' time spent maintaining the subsystems becomes easy to forecast;
  • management can always assess the specialists' workload;
  • you can predict when the system will be tuned to the specific corporate information system and will actually provide the required level of information security.

Wednesday, September 5, 2007

Crimeware

The trend has been here for some time and is clear to everyone - organized crime has entered the computer security scene. There are many ways to make illegal profit, and malware writers are obviously in high demand. Should we invent a new term for the situation - crimeware? Does it really matter whether it is a virus, a trojan or spyware? Is it reasonable for the end user to distinguish among them?

The antivirus market is consolidating, and maybe it is time for a new name too - anticrimeware, or at least antimalware.

Related links:
Cyber crime tool kits go on sale (BBC)
"Cyber Crime Toolkits" Hit the News (Bruce Schneier's blog)

On virtualization rootkits

Recently I read this (second part), and it makes me feel I am mostly on Joanna's side. Moreover, some ideas in that interview confirm the thoughts about virtualization that I explained in a comment after Amiran's post. In this post I'll try to explain my point of view.

I think the main point is that the hosted OS can at best figure out that it is running in a virtual environment, and even that guess can be fooled. Since all other applications (and of course all kinds of detectors) execute on top of the OS and use it as their interface to the hardware, they blindly trust the OS, and therefore have no means to investigate the fraud. In my comment I stated that malicious software agents can be detected by software detectors, but the success depends on who started first. That's why such software-versus-software (software badness vs. software detectors) methods are like the Red Queen's race. It is also obvious that software evil can easily be detected at the hardware level.

Now let's look at virtualization rootkits - they sit right between the hardware and the system software (the OS). How can the OS detect the cheat if deceiving the OS is the very idea of virtualization? I completely agree with Joanna that these rootkit detectors can only understand that they are in a virtual environment, and only indirectly (side channels, etc.), nothing more. This is rather a philosophical issue - let's try to think the way the OS does: yes, we have figured out that something is wrong, that we are on a virtual machine, but how can we understand that our virtual environment is malicious? What does "malicious" even mean? How do we detect it? The only technology we have for fighting malicious code is pattern matching, and it does not matter whether the patterns are behavioral or code patterns. How can we realize that our virtual environment is malicious when our eyes, ears, nose, etc. are not able to see more than the OS can?

I have an analogy. Suppose you are on a plane in flight. Is it possible to tell whether you are over Moscow or over London? You realize that you are flying at all only because you see the sky through the window, hear the noise, and feel the G-forces. But all of these are pieces of evidence provided by the plane. With noise-cancelling techniques and no windows, you would not even be able to tell whether you are in the air or on the ground.

Thomas Ptacek mentioned here that there are methods to detect unexpected virtualization. Honestly, I have no idea what unexpected virtualization is. Unexpected for whom? For the OS? Is there expected virtualization? How does one decide whether virtualization is expected or not? Too many questions.

Thomas also enumerated these methods (three approaches are explained in the interview). But there are two things against them:

- These detect virtualization, not a virtualization rootkit - which is exactly what Joanna stated: “Unfortunately authors failed to prove their claims and all they presented was just a bunch of hacks of how to detect virtualization, but not virtualization based malware”. (A minimal sketch of this kind of virtualization detection follows after the next point.)

- It is not a good idea in general to try to find malware in the hope that it has bugs that may provide you with evidence. What if the malware has no bugs?
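To illustrate the first point, here is a minimal sketch (assuming a Linux guest, and not one of the specific techniques from the interview) of detecting virtualization artifacts: it can tell that some hypervisor is present, but it says nothing about whether that hypervisor is malicious - exactly the limitation discussed above.

# Minimal sketch (assumes a Linux guest): heuristics that detect *virtualization*,
# not a malicious hypervisor - the limitation discussed above.
from pathlib import Path

KNOWN_VENDORS = ("VMware", "QEMU", "VirtualBox", "innotek", "Xen", "Microsoft")

def cpuinfo_reports_hypervisor() -> bool:
    """The CPUID 'hypervisor present' bit is exposed by the Linux kernel as a CPU flag."""
    try:
        text = Path("/proc/cpuinfo").read_text()
    except OSError:
        return False
    return any("hypervisor" in line for line in text.splitlines() if line.startswith("flags"))

def dmi_vendor_looks_virtual() -> bool:
    """Firmware (DMI) strings often name the virtualization product."""
    try:
        vendor = Path("/sys/class/dmi/id/sys_vendor").read_text()
    except OSError:
        return False
    return any(v.lower() in vendor.lower() for v in KNOWN_VENDORS)

if __name__ == "__main__":
    if cpuinfo_reports_hypervisor() or dmi_vendor_looks_virtual():
        print("some hypervisor is present - but these checks cannot tell whether it is malicious")
    else:
        print("no obvious virtualization artifacts found")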

Nate Lawson added: “…Our key finding is that it will always be easier to detect a (hypervisor) rootkit than it is to write perfect cloaking code for one. When you have a choice, it's always better to be on the side where software bugs benefit your goals. Our code is minimal and is less than 1000 LoC while New Blue Pill is about 7000 LoC. Adding support for hiding from our particular set of checks would increase the size of NBP even more.” It sounds rather strange to me, because AFAIK nobody compares the cost of producing malware with the cost of detection/prevention tools. It seems wrong to state that if a virus is more difficult to write than an antivirus, nobody will write viruses any more :-). The cost of a virus is better compared with the revenue from its actions (unfortunately, nobody writes viruses for fun nowadays) - something like ROI for the virus (malware, rootkit, worm, trojan, whatever).

I agree with the statement: “Five years from now, everyone's desktop operating system will be virtualized by default; rootkits won't have any opportunity to load themselves into hypervisors directly, because there will already be a hypervisor present, and it won't want to share”, and it supports my idea that in the software fight against software malware the winner is whoever starts first.

To conclude this bizarre post, I'd like to repeat the thought that it is more reliable to fight software badness from the hardware level, which is exactly what Joanna said: “… It passed a year and we still don't have any good method for virtualization malware detection and I don't believe we could have any without the help from hardware.”