09 April 2014

Monitoring

Posts relating to the category tag "monitoring" are listed below.

04 December 2012

Denial of Service Attack Defences

Another recent paper from Securosis addresses defending against denial of service (DoS) attacks.

The title sheet from the paper 'Defending Against Denial of Service Attacks'

Defending Against Denial of Service Attacks examines the types of attack currently prevalent, and methods to maintain availability and minimise the adverse economic effect. The paper begins by identifying the threats: protection racketeers, hacktivists, cyber war, exfiltrators, competitors, and business success itself.

The paper describes the types of attack, and defences for both networks and applications. For applications, it covers building security into the software development life cycle, web application firewalls (WAFs), anti-DoS devices and service providers, and content delivery networks (CDNs). It recommends a multi-faceted approach to application DoS protection.

I think some applications will simply be more problematic than others, and avoiding security vulnerabilities, minimising the attack surface and building in application-specific attack detection and response will all help here too.
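One lightweight form of application-specific DoS detection is per-client request rate limiting. A minimal sketch, assuming an in-memory sliding window; the threshold and window values are illustrative, not from the Securosis paper:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window rate limiter: flag clients exceeding max_requests per window_seconds."""
    def __init__(self, max_requests=100, window_seconds=60):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.requests = defaultdict(deque)  # client id -> timestamps of recent requests

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        window = self.requests[client_id]
        # Drop timestamps that have fallen out of the window
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) >= self.max_requests:
            return False  # over the limit: throttle, challenge or block
        window.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=1)
print([limiter.allow("10.0.0.1", now=t) for t in (0.0, 0.1, 0.2, 0.3)])
# → [True, True, True, False]
```

In a real deployment the counters would live in shared storage rather than process memory, and the response would escalate (delay, challenge, block) rather than simply reject.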

The paper includes links to further insightful sources of information, and recommends that to be effective, the process for defending against denial of service attacks needs to include activities before, during and after an attack.

Posted on: 04 December 2012 at 08:00 hrs


15 June 2012

Preparing for AppSec EU 2012 in Athens

I am looking forward to the upcoming OWASP AppSec Research 2012 in Athens from 10th-13th July. The organising team have put on a great programme.

Photograph of a fire alarm control panel

My main participation in the four days of activities will be:

I hope you are attending both the training programme and three-track conference, so please flag me down and say hello. Registration is open, and there are conference discounts for OWASP, ISACA and ISC2 members, and also for students.

Posted on: 15 June 2012 at 07:59 hrs


06 June 2012

Three Infographics - Part 2 - Cybersecurity Soft Spot: Software Applications

Last month Veracode, who publish the State of Software Security Report, posted an infographic on their blog highlighting cyber security risks in publicly listed US companies.

Partial image of the infographic from Veracode's 'Cybersecurity Risks in Public Companies'

The Cybersecurity Risks in Public Companies infographic draws together data from the Verizon Data Breach Investigations Report 2012, the regularly updated Web Hacking Incidents Database, and Veracode's own reports.

Quite a useful pictorial if you want to provide a snapshot of some of the key issues.

Posted on: 06 June 2012 at 11:00 hrs


25 May 2012

Tricolour Alphanumerical Spaghetti

Earlier this week I heard that my talk about vulnerability severity ratings has been accepted for OWASP AppSec Research 2012 in Athens in July. The title of the presentation is "Tricolour Alphanumerical Spaghetti" which I need to explain.

Coloured strands of spaghetti laid out in the arrangement of the Athens metro map ( http://www.amel.gr/typo3conf/ext/sa_map/pi1/files/print_en.html ) with the location of Evangelismos station highlighted, the nearest station to the Department of Informatics and Telecommunications at the University of Athens where AppSec Research 2012 is being held

Do you know your "A, B, Cs" from your "1, 2, 3s"? Is "red" much worse than "orange", and why is "yellow" used instead of "green"? Just what is a "critical" vulnerability? Is "critical" the same as "very high"? How do PCI DSS "level 4 and 5" security scanning vulnerabilities relate to application weaknesses? Does a "tick" mean you passed? Are you using CWE and CVSS? Is a "medium" network vulnerability as dangerous as a "medium" application vulnerability? Can CWSS help? What is FIPS PUB 199? Does risk ranking equate to prioritisation? What is "one" vulnerability?

Are you drowning in a mess of unrelated classifications, terminology and abbreviations? If you are a security verifier and want to know more about ranking your findings more meaningfully, or receive test reports and want to better understand the results, or are just new to ranking weaknesses/vulnerabilities and want an overview, come along to this presentation. It will also explain why the unranked information-only ("grey" or "blue"?) findings might contain some of the best value information.

In the presentation, I will outline techniques commonly used, or referenced, to rank application security weaknesses including:

  • Common Vulnerability Scoring System (CVSS)
  • Common Weakness Scoring System (CWSS)
  • Guide for Conducting Risk Assessments (NIST SP 800-30 Rev. 1 DRAFT)
  • Microsoft's STRIDE and DREAD
  • OWASP Risk Rating Methodology
  • OWASP Top Ten
  • PCI DSS Security Scanning Procedure vulnerability classification
  • Software Engineering Institute (SEI) OCTAVE
  • Standard for Security Categorization of Federal Information Systems (FIPS PUB 199)
  • Custom methods (and tester's experience)

The relevance to application security, advantages and disadvantages of each will be compared. The relatively new Common Weakness Scoring System (CWSS), co-sponsored by the Software Assurance Program in the National Cyber Security Division (NCSD) of the US Department of Homeland Security (DHS), will be described in some detail. This will include an explanation of the Common Weakness Risk Analysis Framework (CWRAF).

The presentation will also examine how impact is calculated and discuss why the direct business impact may not be the only thing you need to worry about. In this part, the counting of weaknesses will be discussed and why all of this is important from a compliance perspective. Five contrasting issues (system information leakage, personal data exposure, cross-site scripting, SQL injection and a non-security PCI DSS compliance issue) will be used to calculate example rankings using the OWASP Risk Rating Methodology, CVSS and CWSS. The methods and results will be compared and contrasted for different types of applications (website, web service and mobile app) in different business contexts. Finally the presentation will provide a list of issues to check before you commission assessments to make sure the results are meaningful.
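To give a feel for how one of these methods produces a numeric ranking, the CVSS version 2 base score can be computed directly from its published equations. A sketch using the standard v2 metric weights for a network-exploitable issue with partial confidentiality, integrity and availability impacts (AV:N/AC:L/Au:N/C:P/I:P/A:P):

```python
def cvss2_base(av, ac, au, c, i, a):
    """CVSS v2 base score, computed from the standard published equations."""
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f = 0.0 if impact == 0 else 1.176
    score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f
    return round(score, 1)

# Metric weights: AV:N=1.0, AC:L=0.71, Au:N=0.704, C:P=I:P=A:P=0.275
print(cvss2_base(1.0, 0.71, 0.704, 0.275, 0.275, 0.275))  # → 7.5
```

The same vector is what many scanners report for a typical remotely exploitable SQL injection, which is why 7.5 appears so often in test reports.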

Conference and training registration is now open. AppSec Research 2012 is being held at the Department of Informatics and Telecommunications at the University of Athens. The nearest metro station is Evangelismos.

Posted on: 25 May 2012 at 07:31 hrs


17 April 2012

Data Breach Investigations Report 2012

At the end of March, Verizon published their 2012 Data Breach Investigations Report. Again it is packed full of useful, well-presented, data.

Figure 22 - Hacking vectors by percent of breaches within hacking - from the report, indicating how web applications remain the third most common attack vector overall

The report shows that many breaches are the result of more than one threat action (malware, hacking, social, misuse, physical, error and environmental). However, hacking accounted for 81% of breaches (58% for larger organisations with over 1,000 employees) and 99% of data records (the same for larger organisations), and as the chart above (Figure 22) shows, remote access/desktop services was the most common hacking vector, followed by backdoor or control channel, and thirdly web applications.

Figures 32 and 33 provide some great data on the scale of records lost for different varieties of data (authentication credentials, bank data, classified, copyright, medical information, organisation data, payment card data, personal data, systems information, trade secrets). From these we can get a feel for the average size of a breach for each data type. Unsurprisingly the number of records lost per "trade secret" event is about 1. For personal data it is around 2 million.

The data on timespan of events by percent of breaches (Figure 40) continues to show the short time from initial attack to initial compromise and initial compromise to data exfiltration (both in minutes), the long average time to discovery (several weeks), and from then until containment/restoration (weeks).

There is perhaps too much emphasis on counts of records lost, but of course this is a "data breach" report. The report states that it makes "no claim that the findings of this report are representative of all data breaches in all organizations at all times". There is clearly a heavy bias to retailers (e.g. type of staff roles, recommendations referencing point of sale), and thus those organisations within scope of standards from the Payment Card Industry Security Standards Council (PCI SSC). However, data was gathered not only from Verizon but also from the Australian Federal Police, the Dutch National High Tech Crime Unit, the Irish Reporting and Information Security Service, the UK's Police Central e-Crime Unit, and the United States Secret Service. So it is not just Verizon's paying clients.

Remember, you don't need to lose data to have an incident or a loss. I'd like to see reports titled:

  • 2012 Attacks Without Data Loss Investigations Report
  • 2012 Data Alteration and Destruction Report
  • 2012 Breachless Fraud & Misuse Report
  • 2012 Undetected Incidents Report
  • 2012 Service Unavailability Investigations Report
  • 2012 Reputation, Risk and Resolve

We have that data, yes? Oh, ...maybe not.

Posted on: 17 April 2012 at 19:05 hrs


17 February 2012

APM Through the SDLC

On Wednesday evening I attended another meeting of the London Web Performance Group at the Lamb Tavern in Leadenhall Market.

Photograph of the speaker Martin Pinner and London Web Performance Group organiser Stephen Thair at the Lamb Tavern in Leadenhall Market, London, 15th February 2012

The subject was Application Performance Management (APM) across the Software Development Life Cycle (SDLC). Martin Pinner described a history of application performance & service availability measurement and management, and how it includes end user experience monitoring, transaction profiling, application discovery & instrumenting, deep-level component monitoring and analytics. He explained that APM needs to be addressed through the SDLC — during development, in test and under operation — across all architectural tiers, and across development, staging/UAT and production environments.

At one point he surveyed the audience about the technologies they were working with for web, application and database servers:

  • Apache HTTPD was most in use, far ahead of IIS and anything else
  • PHP and Java were roughly equally used, trailed by .Net and then others like Node.js and C++
  • MySQL was most in use, followed by MS SQL Server, with a small number of people using everything else (Oracle, DB2, CouchDB, MongoDB, Hadoop systems, etc)

The presentation included pointers to many useful free and commercial products for different APM requirements; rather than trying to repeat them here, you will be able to download the slides once they have been published (I will update this post).

Photograph of the ticket and name badge for the London Web Performance Group's meeting 'APM across the lifecycle' on 15th February 2012

A friendly group, and much for me to learn about in this area.

Posted on: 17 February 2012 at 06:05 hrs


13 December 2011

Updated and Improved Guidance on Use of Cookies, Etc.

The UK's data protection agency Information Commissioner's Office (ICO) has updated the previous guidance on the use of cookies and similar tracking technologies, under the revised Privacy and Electronic Communications Regulations which came into force on 26th May this year.

Cover from the ICO's updated 'Guidance on the Rules on use of Cookies and Similar Technologies'

In a press release today, the ICO warned organisations that they are not doing enough during the lead-in period before formal enforcement begins.

The updated Guidance on the Rules on use of Cookies and Similar Technologies provides concrete advice and practical guidance on the legal requirements, their interpretation and what are considered acceptable practices. The guidance was issued following a review of progress to date, which showed a lack of knowledge and action from web site owners. Of most concern are likely to be persistent cookies, cookies issued by third parties, cookies issued as soon as a user visits a web site, cookies used for any sort of profiling, and cookies which span multiple website hostnames or multiple domains.

If you have any analytics, advertising, tracking or content provision by third party web sites, beware — you may just find the terms and conditions of service state you are responsible for obtaining and managing consent.
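The practical consequence for developers is that non-essential cookies should only be set once consent has been recorded. A minimal server-side sketch of that gating; the cookie names and the essential/non-essential split are illustrative assumptions, not taken from the ICO guidance:

```python
# Strictly necessary cookies (e.g. a session identifier) are exempt from consent
ESSENTIAL = {"sessionid"}

def cookies_to_set(requested, consent_given):
    """Filter the cookies a response may set, based on recorded user consent."""
    if consent_given:
        return dict(requested)
    # Without consent, drop everything except strictly necessary cookies
    return {name: value for name, value in requested.items() if name in ESSENTIAL}

wanted = {"sessionid": "abc123", "analytics_id": "u-42", "ad_tracker": "x9"}
print(cookies_to_set(wanted, consent_given=False))  # → {'sessionid': 'abc123'}
```

The same gate needs to apply to third-party scripts: the analytics or advertising tags themselves should not be emitted into the page until consent exists, since they set their own cookies.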

If you are a web site owner, take note and act now, if you have not already done so. From May 2012, the ICO will be accepting complaints from users, and will then contact web site owners to ask them to respond to the complaint and explain what steps they have taken to comply with the regulations. Therefore, document what you are doing and the decisions taken.

Posted on: 13 December 2011 at 15:21 hrs


21 September 2011

AppSensor Summit at AppSec USA 2011

Following a successful training course yesterday with a great group of delegates, today I attended the OWASP AppSensor Project working group at AppSec USA 2011.

Photograph of downtown Minneapolis where the OWASP AppSec USA 2011 conference is being held

The AppSensor Summit was held to review the project's recent developments and activities, and to gather ideas from existing and new contributors to create a future roadmap. It was good to meet at long last John Melton, AppSensor's lead programmer, and catch up with Michael Coates and Ryan Barnett.

The summit also attracted a diverse range of developers, architects, users and security vendors. There were probably about 10-12 people attending all day, with a few more popping in and out as their timetables and other commitments allowed. The discussions defined the contents for a new book, an AppSensor development life cycle, an integration plan and a new concept to modularise the analysis engine to simplify integration with application software. The idea of creating a set of example usage profiles was also suggested. I think my name is down to help mainly with a new version of the AppSensor book, but I hope I can contribute with some of the interface definitions for interactions with the analysis engine, and possible signalling/exporting functionality.
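The modularised analysis engine concept can be pictured as detection points in the application feeding an engine that counts events per user and triggers a response once a threshold is crossed. A minimal sketch of that idea; the event name, threshold and responses are illustrative, not taken from the AppSensor documentation:

```python
from collections import Counter

class AnalysisEngine:
    """Count security events per user; trigger a response once a threshold is crossed."""
    def __init__(self, thresholds):
        self.thresholds = thresholds  # event type -> events tolerated before responding
        self.counts = Counter()

    def detection_point(self, user, event_type):
        """Called by the application whenever a suspicious event occurs."""
        self.counts[(user, event_type)] += 1
        if self.counts[(user, event_type)] > self.thresholds.get(event_type, float("inf")):
            return self.respond(user, event_type)
        return "log"

    def respond(self, user, event_type):
        # Real responses might warn the user, lock the account or alert operations
        return "lockout"

engine = AnalysisEngine({"FORCED_BROWSING": 3})
results = [engine.detection_point("alice", "FORCED_BROWSING") for _ in range(5)]
print(results)  # → ['log', 'log', 'log', 'lockout', 'lockout']
```

Separating the engine behind an interface like `detection_point` is exactly what makes the "modularise the analysis engine" idea attractive: the application only reports events, and the counting, thresholds and responses can evolve independently.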

The meeting notes are available, so if you have any comments or suggestions, please add them there, or discuss them via the project's mailing list.

The talks at the conference begin tomorrow.

Posted on: 21 September 2011 at 22:30 hrs


09 September 2011

Secure Web Application Development and Implementation

The UK's Centre for the Protection of National Infrastructure (CPNI) has updated its guidance on protecting business applications with the publication this month of a new document on developing and implementing secure web applications.

Partial image of the title sheet from the Centre for the Protection of National Infrastructure CPNI guidance document 'Development and Implementation of Secure Web Applications', published in August 2011

Development and Implementation of Secure Web Applications is a well-written and digestible 81-page A4 document arranged in seven main sections:

  • Introduction to web application security
  • General aspects of web application security
  • Access handling (authentication, session management and access control)
  • Injection flaws
  • Application users and security
  • Thick client security
  • Preparing the infrastructure
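Of these sections, injection flaws are the most directly addressable in code: the standard defence is parameterised queries rather than string concatenation. A minimal sketch using Python's sqlite3 module, with an illustrative schema of my own:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name):
    # The ? placeholder keeps user input as data, never as SQL syntax
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))             # → [('alice', 'admin')]
print(find_user("alice' OR '1'='1"))  # injection attempt matches nothing: []
```

Concatenating the name into the SQL string instead would let the second call rewrite the query's logic; with a bound parameter the whole attack string is simply compared against the name column.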

It appears to replace the good, but somewhat dated, document "Briefing 10/2006 - Secure web Applications - Development, Installation and Security Testing" created by their predecessor, the National Infrastructure Security Co-ordination Centre (NISCC), and issued in April 2006. The new document is more compact and focused, and I think I prefer it. Yes, of course it is more up-to-date, and while it would be possible to argue why some things are included and not others, those other things tend to be explained further in the references. It's clear there is considerable overlap with information from OWASP and the Microsoft SDL, but I'm sure the reverse is true to an extent too.

It is very encouraging that CPNI have taken the time to produce an updated document, which probably reflects the types of risks facing their audience. I am especially pleased to see the section on infrastructure, since application security cannot be an island on its own. I would say the guidance is on the medium-to-heavy-weight side of advice, but that is probably appropriate for critical national infrastructure, and the document does discuss threat modelling initially. It might seem overwhelming to organisations new to the subject, who might need some help on what to do first.

I think the document could perhaps do with more cross-referencing to additional information resources elsewhere. Yes, documents can always be improved, and I am sure we will find niggles and faults with use, but threats evolve and so does our knowledge.

Posted on: 09 September 2011 at 20:00 hrs


30 August 2011

Common Event Expression (CEE) v0.6

Common Event Expression (CEE) Architecture Specification version 0.6 has been published for comment.

Partial image of the Common Event Expression (CEE) Architecture & Components diagram from Common Event Expression (CEE) Architecture version 0.6, showing the relationship between security event, CEE event log recommendations (CELR), the taxonomy, CEE log syntax (CLS) and CEE log transport (CLT)

As noted in June, CEE defines the structure and components comprising the community-developed event log standard, which intends to be industry-accepted and practical. The following v0.6 documents were released on 26th August 2011:

I will be having a read through these to see how they can be applied to application logging in some upcoming projects.
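Whatever syntax CEE finally settles on, the underlying idea of structured, machine-parseable application events can be sketched today with name-value records. An illustrative example; the field names here are my own, not the CLS specification:

```python
import json
import datetime

def log_event(action, status, user, **fields):
    """Emit one application security event as a structured JSON record."""
    record = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,   # what happened, e.g. "login"
        "status": status,   # the outcome, e.g. "failure"
        "user": user,
    }
    record.update(fields)   # any extra context, e.g. source address
    return json.dumps(record, sort_keys=True)

entry = log_event("login", "failure", "alice", src_ip="203.0.113.7")
print(entry)
```

The point of the structure is that a log collector can filter and correlate on named fields (all failed logins for one user, say) instead of regex-scraping free-text messages, which is the problem CEE's taxonomy and syntax aim to standardise.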

Feedback is sought on these documents using the CEE Email Discussion List or by email to cee@mitre.org.

Posted on: 30 August 2011 at 08:06 hrs



