04 April 2014

Validation

Posts relating to the category tag "validation" are listed below.

09 September 2011

Secure Web Application Development and Implementation

The UK's Centre for the Protection of National Infrastructure (CPNI) has updated its guidance on protecting business applications with the publication this month of a new document on developing and implementing secure web applications.

Partial image of the title sheet from the Centre for the Protection of National Infrastructure CPNI guidance document 'Development and Implementation of Secure Web Applications', published in August 2011

Development and Implementation of Secure Web Applications is a well-written and digestible 81-page A4 document arranged in seven main sections:

  • Introduction to web application security
  • General aspects of web application security
  • Access handling (authentication, session management and access control)
  • Injection flaws
  • Application users and security
  • Thick client security
  • Preparing the infrastructure

It appears to replace the good but somewhat dated document "Briefing 10/2006 - Secure web Applications - Development, Installation and Security Testing", created by CPNI's predecessor, the National Infrastructure Security Co-ordination Centre (NISCC), and issued in April 2006. The new document is more compact and focused, and I think I prefer it. Yes, of course it is more up-to-date, and while it would be possible to argue about why some things are included and not others, those other things tend to be explained further in the references. There is clearly considerable overlap with information from OWASP and the Microsoft SDL, but I'm sure the reverse is true to an extent too.

It is very encouraging that CPNI have taken the time to produce an updated document, but that probably reflects the types of risks facing their audience. I am especially pleased to see the section on infrastructure, since application security cannot be an island on its own. I would say the guidance sits on the medium-to-heavy-weight side of advice, but that is probably appropriate for critical national infrastructure, and the document does discuss threat modelling early on. It might seem overwhelming to some organisations new to the subject, and they might need some help on what to do first.

I think the document could perhaps do with more cross-referencing to additional information resources elsewhere. Yes, documents can always be improved, and I am sure we will find niggles and faults with use, but threats evolve and so does our knowledge.

Posted on: 09 September 2011 at 20:00 hrs


19 August 2011

How Old Is the Internet?

Last night, I came across evidence of the oldest web site so far known.

Partial screen capture showing the start of the customer survey - the first question says 'According to our records, on 29/12/1899 you purchased ******* Is this correct? Note the tickets were purchased online at www.*******.co.uk', with three possible answers 'Yes, No, Don't know'

I had been sent a request to complete an online customer survey and duly clicked through to the online form. Clearly I am very old, and the web site even older. I did wonder what the data retention policy is. Or maybe there's been a slight data import issue here? Applications need data validation in more places than just inputs from humans. Data from other systems, including so-called "trusted systems", can also be prone to errors, incompatibilities and troublesome content, and some of it can also be malicious. It needs to be defined properly and then validated.
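Incidentally, 30 December 1899 is day zero in several spreadsheet and database date systems, so a null or zero value imported carelessly can surface as a date around then. A minimal sketch (in Python, with an invented launch date) of the kind of plausibility check that would have caught this:

```python
from datetime import date

# Illustrative bound - a real system would use its own launch date
SERVICE_LAUNCH = date(1995, 1, 1)

def plausible_purchase_date(d: date, today: date) -> bool:
    """Reject purchase dates from before the service existed, or from
    the future. Note: 30 December 1899 is day zero in several
    spreadsheet/database epochs, so careless imports of null or zero
    values can produce dates around then."""
    return SERVICE_LAUNCH <= d <= today
```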

I remember one of my own projects which threw an input validation error many years after it was deployed, because the system it was integrated with changed the format of their response codes. My application was accused of being "over-engineered". Well, "fail-secure", I said. In any case, prior to development, we had tried for a long time to get a specification for what codes to expect, but no-one had an answer, so we had to make some assumptions and put bounds on what was reasonable. It worked for four years, and was logging throughout, but I admit it could have done with sending an alert on detection of an invalid response.
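That bounding of "what is reasonable" can be sketched as an allow-list with fail-secure handling, plus the alert I admit we should have added (all names and codes here are invented for illustration):

```python
import logging

logger = logging.getLogger("integration")

# Hypothetical codes agreed with the upstream system at design time
EXPECTED_CODES = {"00", "05", "12", "91"}

def alert_operations(code: str) -> None:
    # Placeholder: notify whoever maintains the integration
    logger.critical("ALERT: upstream format may have changed (code=%r)", code)

def handle_response(code: str) -> str:
    """Fail secure: anything outside the agreed bounds is rejected,
    logged and alerted on, rather than silently accepted."""
    if code in EXPECTED_CODES:
        return code
    logger.error("Unexpected response code %r from upstream system", code)
    alert_operations(code)
    raise ValueError("Invalid response code")
```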

While the example of the customer survey above is just mildly amusing, it might hint at poor secure development practices — just the sort of thing malicious users might ponder how to exploit. I don't think there's any significant risk here, especially since the date appeared to be the only custom data in the survey.

Posted on: 19 August 2011 at 08:13 hrs


03 May 2011

Microsoft SDL Process Guidance Update 5.1

Microsoft has released their annual update to the Security Development Lifecycle (SDL) Process Guidance.

Photograph of a high barred access gate with signs saying 'Caution - Anti-climb Paint', 'Warning - These Premises Are Protected by...' and an observation mirror mounted on a wall in the background

SDL 5.1 includes several new, updated and promoted controls which probably better reflect typical design and coding faults. For example, in Phase 2 - Design, these have been added:

  • Mitigate against Cross-Site Scripting (XSS).
  • Apply no-open X-Download-Options HTTP header to user-supplied downloadable files.
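The second item refers to the X-Download-Options response header. A framework-agnostic sketch (in Python, with illustrative values) of the headers a download handler might set for a user-supplied file:

```python
def download_headers(filename: str) -> dict:
    """Response headers for serving a user-supplied file (a sketch)."""
    return {
        "Content-Type": "application/octet-stream",
        # IE8+: do not open the file in the site's security context -
        # the user must save it first
        "X-Download-Options": "noopen",
        # Always serve as an attachment, never inline
        "Content-Disposition": f'attachment; filename="{filename}"',
        # IE8+: do not sniff the content type (a related SDL item)
        "X-Content-Type-Options": "nosniff",
    }
```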

In security controls for cryptography:

  • Provide support for certificate revocation.
  • Limit lifetimes for symmetric keys and asymmetric keys without associated certificates.
  • Support cryptographically secure versions of SSL (must not support SSL v2).
  • Use cryptographic certificates reasonably and choose reasonable certificate validity periods.

During Phase 3 - Implementation, the following requirements have been updated:

  • Use minimum code generation suite and libraries.
  • Compile native code with the /GS compiler option.
  • Use secure methods to access databases.

And still in Implementation, the following have been added/promoted:

  • Do not use Microsoft Visual Basic 6 to build products.
  • Ensure that regular expressions do not execute in exponential time.
  • Harden or disable XML entity resolution.
  • Use safe integer arithmetic for memory allocation for new code.
  • Use secure cookie over HTTPS.
  • AllowPartiallyTrustedCallersAttribute (APTCA) review.
  • Mitigate against cross-site request forgery (CSRF).
  • Load DLLs securely.
  • Minimum ATL Version and Secure COM Coding Requirements.
  • Reflection and authentication relay defense.
  • Sample code should be SDL compliant.
  • Internet Explorer 8 MIME handling: Sniffing OPT-OUT.
  • Safe redirect, online only.
  • Comply with minimal Standard Annotation Language (SAL) code annotation recommendations
  • Use HeapSetInformation.
  • COM best practices.
  • Restrict database permissions.
  • Use Transport Layer encryption securely.
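To illustrate the regular expression item in the list above: a hedged Python example of a pattern with nested quantifiers that backtracks exponentially on non-matching input, alongside a linear-time equivalent.

```python
import re

# Vulnerable: nested quantifiers such as (a+)+ can backtrack
# exponentially on inputs like "a" * 30 + "b" - avoid patterns
# like this (do not run it on such input).
VULNERABLE = re.compile(r"^(a+)+$")

# Equivalent matching behaviour without nested quantifiers
SAFE = re.compile(r"a+")

def is_all_a(s: str) -> bool:
    # Runs in linear time regardless of input
    return SAFE.fullmatch(s) is not None
```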

And finally in Phase 4 - Verification:

  • File parsing.
  • Network fuzzing.
  • Binary analysis.

There are also some changes to the SDL-Agile Requirements.

So, quite a significant update really, with many good recommendations being added or improved upon. Whatever your programming language, it is worth cross-checking these items with your own coding standards.

Posted on: 03 May 2011 at 08:43 hrs


22 April 2011

State of Software Security Report Volume 3

The third semi-annual "State of Software Security Report - The Intractable Problem of Insecure Software" has been issued by Veracode (see my previous comments on volume 1 and volume 2).

Partial view of one of the figures in Veracode's report State of Software Security - Volume 3

Volume 3 provides further insight into the results of static binary, dynamic, and manual security testing of almost 5,000 applications over the last 18 months from Veracode's wide client base. The data covers both web and non-web application code in the most common programming languages: C/C++, ColdFusion, Java, .NET and PHP.

This report provides even more data on the types of vulnerabilities found and further comparison between applications by industry sector, company type, purpose, supplier type and time to acceptable quality. There is a wealth of statistics which will be useful to anyone looking to reduce software vulnerabilities including developers, testers and those in the information security industry. I'm particularly impressed by the thought that has gone into the design of the data-rich charts and the honesty about whether trends are statistically significant.

One aspect mentioned is that newer applications tested on first time submission are not much better than older ones (in this case "older" means "a year or so ago"). The reasons suggested are either lack of secure development practices, or such practices were performed but inadequately. But I wonder if this may be the result of Veracode's customers beginning to work backwards through their legacy applications, to assess and thus rank them for remediation effort? Therefore, these legacy applications will not have had the same degree of care and attention as perhaps more recently developed software.

The mine of information presented over 50 pages also discusses the relatively low level of security knowledge among developers, and the need to provide better awareness and training. A new section in this report attempts to examine remediation efforts. I really appreciate the effort that has gone into this and the presentation of so much data analysis. We have to thank Veracode's customers for allowing their data to be included in this aggregated analysis.

The report also discusses how there is a growing usage of third-party risk assessments, where the software is assessed independently using multiple testing techniques. In some sectors, software suppliers are increasingly being held accountable for the security quality of the applications they produce. I think that is a good thing.

While there is a comparison of different sectors, I wonder if it will be possible to delve deeper into some of the details in future? For example, some large providers of outsourced development are also active in the software security space, with their own products for static & dynamic security testing, and even software security consultancy services. Do those companies take their own medicine? Do they apply the knowledge and tools they offer in one part of their business to their own software development services? We probably won't find out any time soon, but it would be fascinating to know.

The report's data suggests web applications are still plagued by vulnerabilities such as cross-site scripting (XSS), information leakage and injection (both SQL injection and CRLF injection). Meanwhile the most frequently found issues for non-web applications are buffer overflows, error handling and potential backdoors. Cryptographic issues are also very common. The majority of applications tested suffer from these well-known defects, all of which are well documented and have a range of methods to solve them.

Good reading for the beach this weekend!

Posted on: 22 April 2011 at 12:45 hrs


23 November 2010

Inline Form Validation

Some form entry errors are due to human mistakes; these may be made worse or more likely by poor design. Can HTML 5 and CSS 3 help? A fairly recent article Forward Thinking Form Validation on A List Apart discusses how the new input types and attributes in HTML 5 combined with CSS 3's basic UI module can be used to guide users on how to complete form fields in real-time, making it less likely for them to make mistakes.

Photograph of a huge crowd attending the first Muse concert at Wembley Stadium in September 2010

The explanation and demonstration are quite compelling, but do read the associated comments. Client-side validation can reduce human error (e.g. typos), and the number of requests submitted to the server with invalid data (see Input Masking at the Web Browser, Don't Stop the Attack (Too Soon), Let Down By Customer Surveys, Email Address Formats, and Input Validation and Output Encoding). See also these recent blog posts elsewhere:

But do not expose all your business logic in client-side validation. For example, checking the general format of the account number would be reasonable, but the detailed validation (including authorisation) checks should only be undertaken server-side. This is because if the session or authorisation checks fail, it is difficult to handle this well dynamically on the form page—do we suddenly display an authentication form? Additional business logic on the client can also introduce new vulnerabilities.
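That split might be sketched like this (the account format and function names are mine, not from the article):

```python
import re

# Illustrative rule only: 8-digit account numbers
ACCOUNT_FORMAT = re.compile(r"^\d{8}$")

def client_side_ok(account: str) -> bool:
    """Shallow format check - safe to mirror in browser JavaScript."""
    return ACCOUNT_FORMAT.match(account) is not None

def server_side_ok(account: str, user_accounts: set) -> bool:
    """Full check: format AND authorisation. This logic stays on the
    server and is never exposed to the client."""
    return client_side_ok(account) and account in user_accounts
```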

The validation constraints also need to be flexible enough to cater for "valid", "reasonable" and "invalid" values. For example, you may want to store a telephone number without space characters (the valid format), but it is probably unreasonable to impose this on users, who may well find it easier to type the spaces (or hyphens and parentheses) as they go. That is a reasonable value, which client-side validation could allow, and which is then accepted and sanitised before storage. When the telephone number is re-displayed it must be encoded appropriately, but it can also be formatted in the most legible way. The same applies to space characters in credit card numbers.
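A sketch of accepting "reasonable" input while storing only the "valid" canonical form (the format rule here is illustrative only):

```python
import re
from typing import Optional

def canonicalise_phone(raw: str) -> Optional[str]:
    """Accept 'reasonable' input (spaces, hyphens, parentheses),
    return only the 'valid' canonical form, reject everything else."""
    cleaned = re.sub(r"[ \-()]", "", raw)
    # Illustrative rule: optional leading +, then 7-15 digits
    if re.fullmatch(r"\+?\d{7,15}", cleaned):
        return cleaned
    return None

def display_phone(canonical: str) -> str:
    """Re-format for legibility; output encoding happens elsewhere."""
    # Naive grouping, purely for illustration
    return " ".join(canonical[i:i + 4] for i in range(0, len(canonical), 4))
```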

And, always, undertake the same validation checks on the server too.

In my talk at AppSec Washington DC earlier this month (slides and workbook), I used the form validation as an example of how knowledge of any client-side validation may affect the response to data verification failures. In a form without client-side validation, a wide range of user input is not necessarily malicious.

Chart showing a range of unacceptable to acceptable user behaviour for invalid data received by an application with 'Form radio button element item value is not a positive, non zero, integer' well to the unacceptable end of the range, and 'Form TEXT element account code is a string but is the wrong format', 'Form TEXT element password value has trailing white space' and 'Form TEXT element phone number value contains a hyphen character' towards the acceptable end of the range

However, if there is fully enforced client-side format validation, the receipt of invalid data could be more suspicious. This is because a user would have to use special tools to achieve this effect.

Chart showing a range of unacceptable to acceptable user behaviour for invalid data, which has also bypassed client side validation received by an application with the same points as in the previous chart, but they are all now in the unacceptable area except for 'Form TEXT element password value has trailing white space'

The situation is different if the data being validated came from another computer system, rather than from direct human entry.

All human users react in different ways to forms, and depending on circumstances, there will be a variety of human errors submitted. Client-side validation can help with the user experience but is not a replacement for server-side validation. Knowledge of the validation mechanisms can help the server make better decisions about the type of response to make. But remember that each browser version may react in different ways, and, as before, if you are using any JavaScript for validation, this may be turned off by some users. Humans make programming errors too!
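The decision described above could be sketched as a very simple classifier (the categories and names are mine):

```python
from enum import Enum

class Suspicion(Enum):
    LIKELY_HUMAN_ERROR = 1
    SUSPICIOUS = 2

def classify_invalid_input(field_enforced_client_side: bool) -> Suspicion:
    """If the browser already enforced the format, invalid data can only
    arrive via special tooling (or disabled JavaScript - a caveat noted
    above), so treat it as more suspicious; otherwise it is probably an
    ordinary typo."""
    if field_enforced_client_side:
        return Suspicion.SUSPICIOUS
    return Suspicion.LIKELY_HUMAN_ERROR
```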

Posted on: 23 November 2010 at 08:31 hrs


28 October 2010

Benign Unexpected URLs - Part 3 - Additional/Missing Parameters

Parts 1 and 2 provided information on some unexpected URLs that may also be valid entry points for an application. But are there also additional parameters and missing mandatory parameters which are possibly benign requests, and not an attack?

Yes, some parameter errors are not intentionally malicious. Like any other user input, they should be subject to the same types of validation checks, encoding and escaping. Some examples of additional/missing parameters are shown below.

All POST Parameters Missing
If all mandatory parameters of a request using the HTTP method "POST" are missing, this may be the case of a linked, bookmarked or indexed URL being requested at a later date.

Tracking
Extra tracking parameters are added by email marketing links, web adverts, RSS feeds, news readers, content syndication, etc, and may be sent as cookies or URL parameters.

?utm_source=feedburner&utm_medium=feed&utm_campaign=...&utm_content=...
__utma
__utmb
__utmc
__utmk
__utmv
__utmx
__utmz

Tracking parameters of any conceivable name may also be added directly into links within the site or from other sites by internal staff or external service providers.

?callback=...
?click=...
?promotion_code=GN6JW...
etc

Network Devices
Parameters (often HTTP headers) added by trusted upstream network devices (e.g. proxy servers).

notified-SplashPage
notified-Splash_Page_Minimal
notified-SplashPages
notified-TermsAndAgreement
notified-VCN_Strict

Parameters added by untrusted network devices should not be assumed to be benign.

Cookies
Additional cookies may be submitted in requests as a result of other applications on the same domain, marketing tracking, etc. Some web crawlers might not handle a site's cookies correctly, and you may receive cookie names that are fragmented parts of the intended real cookies, e.g. with names like 'path', 'secure' and 'httponly'.

APIs
If you expose a service using a third party's API and their API definition changes, you may receive requests with parameter names and values that you do not expect, or without ones you do expect. This might also occur where you use APIs elsewhere (e.g. a payment authentication web service) and there is a callback to your application (e.g. on successful payment).

Malformed Query String
Some web crawlers may request URLs with parameters that have encoded ampersand characters, or parts of them, as parameter names:

?...amp=...
?...&=...

URL Truncation
Truncation of the URL may mean some query string parameters and/or path parameters are lost.

New Functionality
Change control processes should ensure that any definition of parameters is kept up-to-date as changes are made to an application. Problems in this area should be discovered during testing processes.
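One way to handle these benign extras is an allow-list that silently tolerates well-known tracking parameters and logs genuinely unexpected names for review. A sketch (the lists are illustrative only):

```python
EXPECTED = {"page", "q"}                  # parameters the handler actually uses
TOLERATED_PREFIXES = ("utm_", "__utm")    # common analytics additions

def triage_params(params: dict) -> tuple:
    """Return (parameters to process, names worth logging for review)."""
    keep, log_these = {}, []
    for name, value in params.items():
        if name in EXPECTED:
            keep[name] = value
        elif name.startswith(TOLERATED_PREFIXES):
            pass                          # benign tracking: ignore silently
        else:
            log_these.append(name)        # unexpected: log it, don't panic
    return keep, log_these
```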

Please let me know additional items and I will add them to these posts.

Posted on: 28 October 2010 at 10:04 hrs


15 October 2010

Image Recognition CAPTCHAs

Web users should be very familiar with challenge-response CAPTCHA tests, which are also known as Human Interactive Proofs (HIPs).

Stage set at Muse's 2010 concert at Wembley Stadium, London, showing icon-style images of human shapes

A lot of time is spent devising CAPTCHAs and trying to break them in an automated manner. They are used where website owners want to try to ensure a real person is undertaking a transaction such as logging in or submitting content. Now, most web sites don't need to use CAPTCHAs, and there can be accessibility problems, but they are often needed for higher-profile sites or when targeted automated attacks are occurring.

A new paper Attacks and Design of Image Recognition CAPTCHAs examines image recognition CAPTCHAs (IRCs) and analyses the effectiveness and security of the schemes considering:

  • ease of use by humans
  • difficulty to automate
  • universality
  • resistance to no-effort attacks
  • scalability
  • use of a secret database.

The authors describe the lessons learned, some fundamental guidelines for IRCs and propose an alternative IRC which relies on recognising an object by exploiting its surrounding context. This requires the user (hopefully a human) to select an item from a set of objects detached from the original image, and then place it back at its original position in the image. Its usability, robustness and copyright issues are discussed.

Posted on: 15 October 2010 at 09:05 hrs


17 September 2010

OWASP AppSec Ireland 2010 - Part 2

After arriving in Dublin last night, I walked to Trinity College this morning and had a little time for a coffee and to greet people I knew before we moved into the lecture theatre.

Photograph from the presentation at AppSec Ireland 2010

Following the welcome to OWASP Ireland 2010 by Eoin Keary, Fabio Cerullo & Rahim Jina of the OWASP Ireland Board, John Viega delivered a thought-provoking keynote speech on "Application Security in the Real World". John described real-world problems and argued that approaches to application security need to prove their value. He described seven practices - awareness & training, assessments & audits, development & QA, vulnerability response, operational security, compliance and security metrics - which, when applied appropriately, can demonstrate a return on investment.

Photograph from the presentation at AppSec Ireland 2010

OWASP Board members Eoin Keary & Dinis Cruz provided an overview of OWASP's current status, its activities including many of its projects, and the global committees. They described how OWASP's mission "is to make application security VISIBLE for buyers and INVISIBLE for developers". Samy Kamkar was given a brief slot to describe how cross-site scripting (XSS) can be used against users' routers to eventually obtain the MAC address and ultimately a user's geolocation using Google data.

Photograph from the presentation at AppSec Ireland 2010

After a short break and an opportunity to look at the sponsor booths, the conference split into two streams for the rest of the morning. Fred Donovan spoke on the topic of "Counter Intelligence as a Defense", describing how gathering information and taking approved action can help identify, assess and potentially neutralise threats to an organisation's ability to conduct business, and enhance the protection of corporate assets and customer data. He also described sources of information, including web application firewalls (WAFs), server logs, application logs, the media, list servers, MITRE, honeypots and the source of the threat itself, as well as some of the impediments and dos and don'ts of this pro-active approach.

Photograph from the presentation at AppSec Ireland 2010

Ryan Berg gave a lively and fast-paced description of the "Path to a Secure Application". He described what isn't working, and the need to mitigate the damage that attackers can do rather than assuming you can keep them outside your network. He provided numerous examples of how security can be built into all stages of the software development process, but made the point that organisations should make efforts to improve their existing application development processes, rather than creating new ones.

Photograph from the presentation at AppSec Ireland 2010

Dan Cornell described how Android and iPhone smartphone applications are coded and deployed, and how and when the source code can be reverse engineered. He presented an example Android application and some tools to demonstrate how embedded URLs, file paths and host names can be extracted to help determine its workings. He recommended that, like other applications, smartphone applications should undergo threat modelling, that care should be taken over what information is stored and where, that caution is needed when consuming any third-party services, and that enterprise web services should be approved and deployed securely.

Photograph from the presentation at AppSec Ireland 2010

After lunch, which was held in the beautiful Dining Hall of Trinity College Dublin, Professor Fred Piper (Royal Holloway College) presented the second keynote on "The changing face of cryptography". Prof Piper explained that people do not need to attack algorithms when they can attack the implementation or the cryptographic system instead. He gave an engaging and personable talk about algorithms, implementation weaknesses, real-life cryptography and the related political and social issues, clearly demonstrating his wide and deep knowledge.

Tyler Shields' presentation "Application Security Scoreboard in the Sky" described the results from Veracode's State of Software Security, which I have discussed before but is worth remembering as a good source of information when building business cases for secure development processes. The first volume had examined the differences between open source, commercial and out-sourced software. The second volume is due to be released in the next fortnight.

Photograph from the presentation at AppSec Ireland 2010

Rory Alsop & Rory McCune (co-chairs of OWASP Scotland) presented "The 'Real' Application Security Pentest", describing why penetration test companies and purchasers of their services need to understand the requirements clearly and make best use of the budget. They noted that penetration testing is increasingly being used, but there are inconsistencies in how it is undertaken and customers don't always receive what they want. The speakers described common myths and what buyers should do.

Photograph from the presentation at AppSec Ireland 2010

Vinay Bansal and Martin Nystrom jointly presented Cisco's experiences of "How to Defend Fragile Web Applications". Cisco know they are under constant attack on their perimeter, but they have to concentrate their resources on defending DMZ & internal systems and minimising the damage from compromises. Cisco use architectural assessments, developer training, secure coding techniques, verification practices and, more recently, web application firewalls (WAFs) in a reverse proxy mode using Apache httpd, ModSecurity and ModProxy. They described the problems and benefits of using WAFs in front of Cisco's tens of thousands of applications, and how they are trialling the WAFs for virtual patching where the applications cannot be modified.

Photograph from the presentation at AppSec Ireland 2010

The final keynote, "Hackers and Hollywood: The Implications of the Popular Media Representation of Computer Hacking", was presented by Damian Gordon (School of Computing, Dublin Institute of Technology). Damian gave a light-hearted look at whether or not movies accurately portray hackers and the implications of that portrayal, based on filtering 200 potential movies down to 50 clearly relating to computer hackers, and not just cyberpunk, sci-fi or films where hacking is only a peripheral activity. The conclusion? Movies are doing quite well but are missing out on some hacking features such as denial of service, phishing, organisational identity theft and e-harassment of employees. Maybe next year?

Congratulations for a very successful and informative day to all the organisers, helpers and speakers. A little late, but off to the social event...

Posted on: 17 September 2010 at 19:26 hrs


03 August 2010

Real World Enterprise Application Security Programmes

This year I have written about web application security programmes a number of times, including how software vulnerability testing firms recommend risk-based application security programmes, and generalised results from a survey about web application security programmes.

Photograph of a circular gauge labelled 'synchronisation meter' with a pointer sitting between 'slow' and 'fast' marked on the face, from the London Transport Museum in Covent Garden

But what are enterprises doing in real life and what are the issues? During the second day of OWASP AppSec Research 2010, Michael Craigue of Dell presented on Secure Application Development for the Enterprise: Practical, Real-World Tips. Although I missed it, people who did attend this track were enthusiastic about it and the video recording has now been published. I watched it last weekend.

Michael described Dell's 10-strong Global Information Security Services group, how it works with 3,000-5,000 developers in internal teams, and how their appsec work is built on a published and maintained secure application development standard. Some of the problems encountered at Dell were platform diversity, security expert retention, the need to develop self-help documentation for low and medium risk projects, a lack of good metrics around security awareness training, the high overhead of conventional threat modelling, and the need to build security into the development lifecycle slowly, and in a business-focused manner.

At Dell, the project risk is calculated from ten factors including data classification, compliance requirements, whether it is externally facing, and the security knowledge of the development team. Interestingly, in the final questions from the audience, Michael mentioned Dell are using Open SAMM to identify gaps, measure how well their security programme is performing and to focus improvement efforts. Even projects that the group does not get involved with directly, are subject to quality checks and audit such as using Control Self Assessments (CSAs), which look for the artifacts required in the self-help documentation, even for low-risk applications.
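As a purely illustrative miniature (Dell's actual ten factors and their weightings are not published here), a factor-based project risk calculation might look like this:

```python
# Hypothetical factors and weights, invented for illustration only
FACTORS = {
    "data_classification": {"public": 0, "internal": 1, "confidential": 3},
    "externally_facing": {False: 0, True: 3},
    "compliance_in_scope": {False: 0, True: 2},
    "team_security_knowledge": {"high": 0, "medium": 1, "low": 2},
}

def project_risk(answers: dict) -> str:
    """Sum the weighted factor answers and bucket into a risk rating."""
    score = sum(FACTORS[f][answers[f]] for f in FACTORS)
    return "high" if score >= 6 else "medium" if score >= 3 else "low"
```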

There is another description of software assurance practices at Ford in 2009, recently published on the US DHS's best practices web site Build Security In. The Ford programme is quite different. Every application security programme is unique, because every organisation's culture, applications and acceptance of risk are different.

What is yours like?

Posted on: 03 August 2010 at 09:00 hrs


02 July 2010

Web Site Security Basics for SMEs

Sometimes when I'm out socially and people ask what I do, the conversation progresses to concerns about their own web site. They may have a hobby site, run a micro-business or be a manager or director of a small and medium-sized enterprise (SME)—there's all sorts of great entrepreneurial activity going on.

It is very common for SMEs not to have much time or budget for information security, and the available information can be poor or inappropriate (ISSA-UK, under the guidance of their Director of Research David Lacey, is trying to improve this). But what can SMEs do about their web presence? It is very unusual not to have a web site, whatever the size of the business.

Photograph of a waste skip at the side of St John Street in Clerkenwell, London, UK, with the company's website address written boldly across it

Last week I was asked "Is using <company> okay for taking online payments?" and then "what else should I be doing?". Remember we are discussing protection of the SME's own web site, not protecting its employees from using other sites. If I had no information about the business or any existing web security issues, this is what I recommend checking and doing before anything else:

  • Obtain regular backup copies of all data that changes (e.g. databases, logs, uploaded files) and store these securely somewhere other than the host servers. This may typically be daily, but the frequency should be selected based on how often data changes and how much data the SME might be prepared to lose in the event of total server failure.
    • check backup data can be read and restored periodically
    • don't forget to securely delete data from old backups when they are no longer required
  • Use a network firewall in front of the web site to limit public (unauthenticated user) access to those ports necessary to access the web site. If other services are required remotely, use the firewall to limit from where (e.g. IP addresses) these can be used.
    • keep a record of the firewall configuration up-to-date
    • limit who can make changes to the firewall
  • Ensure the host servers are fully patched (e.g. operating system, services, applications and supporting code), check all providers for software updates regularly and allow time for installing these.
    • remove or disable all unnecessary services and other software
    • delete old, unused and backup files from the host servers
  • Identify all accounts (log in credentials) that provide server access (not just normal web page access), such as those used for transferring files, accessing administrative interfaces (e.g. CMS admin, database and server management/configuration control panels) and using remote desktop. Change the passwords. Keep a record of who has access, remove accounts that are no longer required, and enable logging for all access using these accounts.
    • restrict what each account can do as much as possible
    • add restrictions to the use of these accounts (e.g. limit access by IP address, require written approval for use, keep account disabled by default)
  • Check that every agreement with the third parties required to operate the web site is in the organisation's own name. These may include the registration of domain names, SSL certificates, hosting contracts, monitoring services, data feeds, affiliate marketing agreements and service providers such as for address look-up, credit checks and making online payments.
    • ensure the third parties have the organisation's official contact details, and not those of an employee or of the site's developers
    • make note of any renewal dates
  • Obtain a copy of everything required for the web site including scripts, static files, configuration settings, source code, account details and encryption keys. Keep this updated with changes as they are made.
    • verify who legally owns the source code, designs, database, photographs, etc.
    • check what other licences affect the web site (e.g. use of open source and proprietary software libraries, database use limitations).
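The backup point above includes periodically checking that copies can actually be read and restored. A minimal sketch of that idea in Python (the paths and function names here are invented for illustration, not a recommendation of a specific tool): create a compressed archive and record its checksum, then re-check the checksum later against the copy held away from the host servers.

```python
import hashlib
import tarfile
from pathlib import Path

def backup_and_digest(source_dir: str, dest_archive: str) -> str:
    """Create a gzipped tar of source_dir and return its SHA-256 digest."""
    with tarfile.open(dest_archive, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    return hashlib.sha256(Path(dest_archive).read_bytes()).hexdigest()

def verify_backup(dest_archive: str, expected_digest: str) -> bool:
    """Re-hash the stored archive; a mismatch means the copy is corrupt
    and cannot be relied on for a restore."""
    actual = hashlib.sha256(Path(dest_archive).read_bytes()).hexdigest()
    return actual == expected_digest
```

A checksum only proves the archive is intact; a full test restore onto a spare machine is still the real check.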

Do what you can, when you can. Once those are done, then:

  • Verify that the web site and all its components (e.g. web widgets and other third party code/content) do not include common web application vulnerabilities that can be exploited by attackers (e.g. SQL injection, cross-site scripting).
  • Check what obligations the organisation is under to protect business and other people's data such as the Data Protection Act, guidance from regulators, trade organisation rules, agreements with customers and other contracts (e.g. PCI DSS via the acquiring bank).
    • impose security standards and obligations on suppliers and partner organisations
    • keep an eye open for changes to business processes that affect data
  • Document (even just some short notes) the steps to rebuild the web site somewhere else, and to transfer all the data and business processes to the new site.
    • include configuration details and information about third-party services required
    • think about what else will need to be done if the web site is unavailable (does it matter, if so what exactly is important?)
  • Provide information to the web site's users on how to help protect themselves and their data.
    • point them to relevant help such as from GetSafeOnline, CardWatch and Think U Know
    • provide easy methods for them to contact the organisation if they think there is a security or privacy problem
  • Monitor web site usage behaviour (e.g. click-through rate, session duration, shopping cart abandonment rate, conversion rate), performance (e.g. uptime, response times) and reputation (e.g. malware, phishing, suspicious applications, malicious links) to gather trend data and identify unusual activity.
    • web server logs are a start, but customised logging is better
    • use reputable online tools (some of which are free) to help.
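On the SQL injection point above, the usual fix is parameterised queries rather than building SQL statements by string concatenation. A minimal sketch using Python's built-in sqlite3 module (the table and data are invented for illustration; the same placeholder idea applies with other database drivers):

```python
import sqlite3

# Illustrative only: placeholder binding keeps user input as data,
# so input like "x' OR '1'='1" cannot change the SQL statement itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(conn, name):
    # The ? placeholder is filled by the driver, never by string joining
    cur = conn.execute("SELECT email FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

With concatenation, the classic `' OR '1'='1` input would return every row; with the placeholder it simply matches no user.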
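Web server logs really are a start for the monitoring point. As one sketch of what "customised logging" can grow from, the snippet below (assuming logs in the common Apache/nginx access-log format; the regular expression is a simplification for illustration) counts HTTP status codes so a spike in 404s or 500s stands out:

```python
import re
from collections import Counter

# Matches the quoted request ("GET /path HTTP/1.1") followed by the
# three-digit status code in common log format lines.
LOG_LINE = re.compile(r'"\S+ \S+ \S+" (\d{3}) ')

def status_counts(lines):
    """Tally HTTP status codes seen in an iterable of access-log lines."""
    counts = Counter()
    for line in lines:
        match = LOG_LINE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts
```

Run daily over the previous day's log and compare against the trend; a sudden change in the mix of codes is worth investigating.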

That's just the basics. So, what would be next for an SME? If the web site is a significant sales/engagement channel, the organisation has multiple web sites, it operates in a more regulated sector or one targeted particularly by criminals (e.g. gaming, betting and financial services), it takes payments or does other electronic commerce, or it allows users to add their own content or processes data for someone else, the above is just the start. Those SMEs probably need to be more proactive.

This helps to protect the SME's business information, but also helps to protect the web site users and their information. After all, the users are existing and potential customers, clients and citizens.

Oh, the best response I had to someone when I was explaining my work: "You're an anti-hacker then?". Well, I suppose so, but it's not quite how I'd describe it.

Any comments or suggestions?

Posted on: 02 July 2010 at 08:18 hrs



© 2008-2014 clerkendweller.com