Availability

Posts relating to the category tag "availability" are listed below.

18 May 2012

Client-Side Storage in HTML5

Client-side, or local, storage is an area of concern for privacy and security. Therefore I was keen to attend the latest meeting of the London Web Performance Group titled HTML5 and Localstorage - Storage in the Browser at the Lamb Tavern (building c1780, but on the same site since 1309) in Leadenhall Market on Wednesday evening.

Photograph of many drawers in a filing cabinet labelled with journal dates

I almost changed my mind, as I was also tempted to attend another local event on the same evening about NoSQL for Java Developers. Anyway, I was very pleased I went to the client-side storage event, though it was so well attended I almost did not get a seat. As usual, Stephen Thair (@TheOpsMgr) had done a great job organising the event.

Andrew Betts (@triblondon) described his experiences developing HTML5 applications for mobile devices, avoiding native code wherever possible, so that content remains available when the device is offline or in a poor signal area by using client-side storage. He described the pros and cons of using HTTP cookies, the Indexed Database API (IndexedDB), Web SQL Database (WebSQL), local storage (a key/value store) and the Application Cache (AppCache). The answer to which one to use turned out to be "all of them". Andrew described how the FT.com application exploits each type's advantages, combining them into a responsive and network-robust application suitable for the most frequent and demanding of users: cookies are used for session management, AppCache for a default fallback page, local storage for static content such as HTML scaffolding, JavaScript and style sheets, and IndexedDB/WebSQL for the HTML content of pages. In this way they manage to fit the application within the HTML5 storage constraints imposed by different operating systems.
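
As a rough illustration of the local-storage part of that approach (a sketch of my own, not FT.com's code; the key naming and fallback behaviour are assumptions), an application might cache static assets in localStorage and fall back to the network when the cache is empty or the quota is exhausted:

    // Minimal sketch: cache a static asset in localStorage and read it back,
    // falling back to the network when it is missing. localStorage throws
    // (typically QuotaExceededError) once the per-origin limit is reached.
    function cacheAsset(key: string, content: string): boolean {
      try {
        localStorage.setItem("asset:" + key, content);
        return true;
      } catch {
        return false; // quota exceeded or storage disabled - continue uncached
      }
    }

    async function loadAsset(key: string, url: string): Promise<string> {
      const cached = localStorage.getItem("asset:" + key);
      if (cached !== null) return cached;   // offline-friendly path
      const response = await fetch(url);    // network fallback
      const content = await response.text();
      cacheAsset(key, content);
      return content;
    }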

He explained many of the techniques used to work around mobile network and device-specific issues, and also described how they squeeze extra storage out of the browser by packing ASCII or base64-encoded content into JavaScript's UTF-16 double-byte string encoding. It is a very clever piece of optimisation, which could also be used for code obfuscation. Details are in the presentation slides.
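
The idea, roughly (this is my own sketch, not FT.com's code), is that localStorage quotas are counted in UTF-16 code units, so packing two 8-bit characters into each 16-bit unit approximately doubles the usable capacity for ASCII or base64 content:

    // Pack two ASCII characters (codes 0-127) into each UTF-16 code unit.
    // Because both bytes stay below 0x80, the packed units never fall into the
    // surrogate range, so they round-trip safely through localStorage.
    function pack(ascii: string): string {
      let packed = "";
      for (let i = 0; i < ascii.length; i += 2) {
        const hi = ascii.charCodeAt(i);
        const lo = i + 1 < ascii.length ? ascii.charCodeAt(i + 1) : 0;
        packed += String.fromCharCode((hi << 8) | lo);
      }
      return packed;
    }

    function unpack(packed: string): string {
      let ascii = "";
      for (let i = 0; i < packed.length; i++) {
        const code = packed.charCodeAt(i);
        ascii += String.fromCharCode(code >> 8);
        if ((code & 0xff) !== 0) ascii += String.fromCharCode(code & 0xff);
      }
      return ascii;
    }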

I think users of client-side storage will have to be careful in case it is deemed to be a tracking technology. In the FT.com case, this client-side storage is not offered to casual web site visitors, only to those who have installed the app, registered and logged in, so there are opportunities to obtain consent over and above any warning the device may offer. We are expecting to hear more about the ICO's plans for enforcement of the new regulations at a press conference this morning. Other HTML5 security issues are of course still a concern here, and I was slightly troubled by one feature mentioned.

The presenter's slides are now available.

Posted on: 18 May 2012 at 09:05 hrs

17 February 2012

APM Through the SDLC

On Wednesday evening I attended another meeting of the London Web Performance Group at the Lamb Tavern in Leadenhall Market.

Photograph of the speaker Martin Pinner and London Web Performance Group organiser Stephen Thair at the Lamb Tavern in Leadenhall Market, London, 15th February 2012

The subject was Application Performance Management (APM) across the Software Development Life Cycle (SDLC). Martin Pinner described a history of application performance & service availability measurement and management, and how it includes end user experience monitoring, transaction profiling, application discovery & instrumenting, deep-level component monitoring and analytics. He explained that APM needs to be addressed through the SDLC — during development, in test and under operation — across all architectural tiers, and across development, staging/UAT and production environments.

At one point he surveyed the audience about what technologies they were working with for web, application and database servers:

  • Apache HTTPD was most in use, far ahead of IIS and anything else
  • PHP and Java were roughly equally used, trailed by .Net and then others like Node.js and C++
  • MySQL was most in use, followed by MS SQL Server, with a small number of people using everything else (Oracle, DB2, CouchDB, MongoDB, Hadoop systems, etc)

The presentation included pointers to many useful free and commercial products for different APM requirements; rather than trying to repeat them here, I will update this post with a link to the slides once they have been published.

Photograph of the ticket and name badge for the London Web Performance Group's meeting 'APM across the lifecycle' on 15th February 2012

A friendly group, and much for me to learn about in this area.

Posted on: 17 February 2012 at 06:05 hrs

08 April 2011

Three ENISA Reports

Spring does seem to be a popular time for organisations and vendors to issue reports. I'm sure we'll be seeing more in the run up to Infosec Europe 2011, but I will keep you informed of anything topical.

Photograph of primroses and hyacinths flowering in the University Parks, Oxford

The European Network and Information Security Agency (ENISA) issued Data Breach Notification Insights in January, and in the last few weeks has issued three further reports which may be of some general interest.

Botnets: Measurement, Detection, Disinfection and Defence

Botnets: Measurement, Detection, Disinfection and Defence describes 25 different practices to measure, detect and defend against botnets. These are discussed under the objectives of mitigating existing botnets, preventing new infections and minimising the profitability of botnets and cybercrime. The recommendations are then discussed for particular groups — regulators & law enforcement, internet service providers, researchers, end users and companies.

Mapping Security Services to Authentication Levels

This report examines e-identity management in the European Union, and in particular the activities of STORK (Secure idenTity acrOss boRders linKed). It looks at the various authentication levels and their mapping to public electronic services in the eGovernment programme framework.

Resilience of the Internet Interconnection Ecosystem

The authors of the latest ENISA report discuss Internet vulnerabilities, concerns about the sustainability of current business models, and the interactions of dependencies and economics. They describe the issues and propose eleven recommendations to increase the Internet's resilience.

Posted on: 08 April 2011 at 09:08 hrs

21 December 2010

Google Search Security Notifications

Last week, Google announced an additional tier of user security notification in its search results. Sites which Google believes have been hacked or otherwise compromised, but which do not yet host malware, may be marked with "This site may be compromised" on search engine result pages.

Diagram showing how a normal website may go straight from 'normal status' to being excluded from the index and search result listings; the site may also be marked as 'compromised' or 'hosting malware' - once resolved, compromised and excluded sites can be submitted using the 'reconsideration review' process whereas sites which were affected with malware need to request a review.

This status is not as severe as the "This Site May Harm Your Computer" notification shown when a site is found to host malware, but take it as an important warning: compromise often leads to malware hosting. See my previous post for suggestions on how to prepare for such an event — they apply equally to "This site may be compromised".

Unlike the review requested after malware has been cleaned up, the process for recovering a clean status in Google for a previously compromised site uses the Request Reconsideration Form.

Google may also remove sites completely from its indexes and search results. This could be due to not having access, content such as malware, incorrect use of the robots exclusion standard, incomplete site maps, incorrect HTTP status codes, or other reasons that lead to a breach of its webmaster guidelines. Sites may also be removed or excluded due to legal action (e.g. if Google receives a Cease and Desist Notice - examples).

There is another tier which doesn't really fit in the above diagram — sites running common application software which is out of date or known to contain security vulnerabilities may receive Webmaster Tools messages, but this information is not currently displayed to search engine users.

Remember, just because Google has not detected use of old or vulnerable application software, a compromise or malware, this doesn't mean none of these is true. Verify your own web applications, and have a plan in place in case any of these occur. Oh, and make someone accountable.

Posted on: 21 December 2010 at 09:00 hrs

13 July 2010

Application Situational Awareness

Knowledge of application context is used routinely in mobile applications—for example sensing a user's context (e.g. location and physical actions, time, etc), reducing network usage during periods of inactivity and designing for users. But how does this idea transfer to the server?

Photograph of computer circuit board

I almost called this environmental awareness, but didn't want to cause confusion with discussions about network/server environments. By 'situational awareness' I mean awareness of factors external to the application that might be used to affect its behaviour. In my talk this week about application intrusion detection, I will be discussing how an aspect such as the general risk level to an organisation/application might be used to alter an application's actions (e.g. the amount of logging, attack detection thresholds). But this awareness can be used beyond attacker detection and response.

Information is knowledge, and additional awareness of external factors can be used to control changes to the application. An adaptive application can learn and change in response to outside factors. And no, I don't mean displaying an intrusive and annoying paperclip that says "It looks like you're writing a letter". Apart from the standard functionality the user sees, some ways your application may already be doing this are:

  • customising content based on:
    • geo-location information
    • user preferences
    • device type (e.g. mobile), browser and screen resolution
    • typical user behaviour
  • implementation of additional delays for failed attempts at authentication
  • use of reputation-based systems
  • displaying the number/identities of active/logged-in users
  • detecting usage of the application by users from a different location than they had used previously (e.g. IP address)
  • showing advertising based on users' behavioural characteristics.

But what else can be done? I remember chatting with someone during an unexpected period of severe weather which had disrupted travel in south-east England one morning. They explained that in situations like this, when their call centre was under-staffed, they had procedures in place to reduce the length of each customer call by shortening their scripts, taking out the offers to help with anything else and the cross-selling/up-selling. The dialogue script was adapted to the situation. A web application could respond in a similar way during periods of rising or unusually high demand, to increase availability (a minimal sketch follows the list below):

  • switch to more static content (e.g. change the home page to static HTML rather than a scripted dynamic page)
  • swap to lower bandwidth assets (e.g. display photographs instead of videos, use lower resolution photos)
  • use third-party servers for some content (e.g. video on YouTube)
  • reduce the size of pages and number of page elements by dropping out non-core material (e.g. promotional items, banners)
  • increase caching
  • delay non-core server intensive activities (e.g. management report generation)
  • provide links to printable forms to divert some or all users of a particular online service.
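
A minimal sketch of how the first of these might be wired up (the load threshold, the loadavg signal and the function names are illustrative assumptions of mine):

    import * as os from "node:os";

    // Serve a pre-rendered static home page while the one-minute load average
    // is high, and the full dynamic page otherwise.
    const LOAD_SHED_THRESHOLD = 4.0; // illustrative value

    function underHeavyLoad(): boolean {
      const [oneMinute] = os.loadavg();
      return oneMinute > LOAD_SHED_THRESHOLD;
    }

    function homePage(renderDynamic: () => string, staticHtml: string): string {
      // Prefer the cheap, cacheable version while the site is busy.
      return underHeavyLoad() ? staticHtml : renderDynamic();
    }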

Similarly, if a local (e.g. dynamic PDF creation or chart generation), back-office (e.g. data archive) or third-party service (e.g. payment authorisation, address look-up) is detected as running slowly or has become unavailable, some of the following may be possible (see the sketch after this list):

  • switch to cached data
  • add a queue to access the function
  • slow down the speed at which users can undertake the function
  • offer alternative (quicker) ways to complete the transaction
  • take the service offline, but offer to email users back when it is available again.
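
A sketch of the detection-and-fallback idea (the function names, the cache and the two-second budget are assumptions of mine for illustration):

    // Call a third-party look-up with a time budget and fall back to cached
    // data when it is slow or unavailable, degrading rather than failing.
    async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
      return Promise.race([
        work,
        new Promise<T>((_, reject) =>
          setTimeout(() => reject(new Error("timed out")), ms)),
      ]);
    }

    async function lookUpAddress(postcode: string): Promise<string[]> {
      try {
        return await withTimeout(callThirdPartyAddressService(postcode), 2000);
      } catch {
        // Service slow or down: use the last known data, or an empty result.
        return readCachedAddresses(postcode) ?? [];
      }
    }

    // Hypothetical dependencies, declared only so the sketch is self-contained.
    declare function callThirdPartyAddressService(postcode: string): Promise<string[]>;
    declare function readCachedAddresses(postcode: string): string[] | undefined;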

Similar changes could occur in advance of, or during, known scheduled application maintenance periods:

  • advance warning notices to users
  • timed count-down to function or application shutdown
  • preventing users from beginning new tasks which might not complete before the shutdown
  • ability for users to request notification that the service is back up.

The important thing (remember "clippy") is not to change the user experience too noticeably, and where there is a significant change (e.g. download the form instead of doing it online), provide a time-stamped explanation of the change and reasons.

These measures all bring complexity, and it is important they do not introduce additional vulnerabilities to the application. The problems are quite likely to be in authentication, authorisation and session management and need to be identified during security specification and verification processes. The effect on data integrity, including accuracy, also needs to be considered. But the measures are worth considering where the alternative is additional standby staff and increased usage of other channels.

Posted on: 13 July 2010 at 09:30 hrs

02 July 2010

Web Site Security Basics for SMEs

Sometimes when I'm out socially and people ask what I do, the conversation progresses to concerns about their own web site. They may have a hobby site, run a micro-business or be a manager or director of a small and medium-sized enterprise (SME)—there's all sorts of great entrepreneurial activity going on.

It is very common for SMEs not to have much time or budget for information security, and the available information can be poor or inappropriate (ISSA-UK, under the guidance of their Director of Research David Lacey, is trying to improve this). But what can SMEs do about their own web presence? It is very unusual not to have a web site, whatever the size of the business.

Photograph of a waste skip at the side of St John Street in Clerkenwell, London, UK, with the company's website address written boldly across it

Last week I was asked "Is using <company> okay for taking online payments?" and then "what else should I be doing?". Remember we are discussing protection of the SME's own web site, not protecting its employees from using other sites. If I had no information about the business or any existing web security issues, this is what I recommend checking and doing before anything else:

  • Obtain regular backup copies of all data that changes (e.g. databases, logs, uploaded files) and store these securely somewhere other than the host servers. This may typically be daily, but the frequency should be selected based on how often data changes and how much data the SME might be prepared to lose in the event of total server failure.
    • check periodically that backup data can be read and restored (a minimal check is sketched after this list)
    • don't forget to securely delete data from old backups when they are no longer required
  • Use a network firewall in front of the web site to limit public (unauthenticated user) access to those ports necessary to access the web site. If other services are required remotely, use the firewall to limit from where (e.g. IP addresses) these can be used.
    • keep a record of the firewall configuration up-to-date
    • limit who can make changes to the firewall
  • Ensure the host servers are fully patched (e.g. operating system, services, applications and supporting code), check all providers for software updates regularly and allow time for installing these.
    • remove or disable all unnecessary services and other software
    • delete old, unused and backup files from the host servers
  • Identify all accounts (log-in credentials) that provide server access (not just normal web page access), such as those used for transferring files, accessing administrative interfaces (e.g. CMS admin, database and server management/configuration control panels) and using remote desktop. Change the passwords, keep a record of who has access, remove accounts that are no longer required, and enable logging for all access using these accounts.
    • restrict what each account can do as much as possible
    • add restrictions to the use of these accounts (e.g. limit access by IP address, require written approval for use, keep account disabled by default)
  • Check that every agreement with a third party that is required to operate the web site is in the organisation's own name. These may include the registration of domain names, SSL certificates, hosting contracts, monitoring services, data feeds, affiliate marketing agreements and service providers such as for address look-up, credit checks and making online payments.
    • ensure the third parties have the organisation's official contact details, and not those of an employee or of the site's developers
    • make note of any renewal dates
  • Obtain a copy of everything required for the web site including scripts, static files, configuration settings, source code, account details and encryption keys. Keep this updated with changes as they are made.
    • verify who legally owns the source code, designs, database, photographs, etc.
    • check what other licences affect the web site (e.g. use of open source and proprietary software libraries, database use limitations).
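
To make the backup-verification point concrete, here is a minimal sketch (the directory, file-naming convention, age limit and gzip format are all assumptions of mine):

    import { readdirSync, readFileSync, statSync } from "node:fs";
    import { gunzipSync } from "node:zlib";
    import { join } from "node:path";

    // Confirm the newest nightly dump exists, is recent, is non-empty and can
    // actually be decompressed and read.
    const BACKUP_DIR = "/srv/backups";
    const MAX_AGE_MS = 26 * 60 * 60 * 1000; // a little slack over 24 hours

    function checkLatestBackup(): void {
      const files = readdirSync(BACKUP_DIR)
        .filter((name) => name.endsWith(".sql.gz"))
        .sort(); // names assumed to sort chronologically, e.g. db-2010-07-01.sql.gz
      if (files.length === 0) throw new Error("no backups found");

      const latest = join(BACKUP_DIR, files[files.length - 1]);
      const info = statSync(latest);
      if (Date.now() - info.mtimeMs > MAX_AGE_MS) throw new Error(`${latest} is stale`);
      if (info.size === 0) throw new Error(`${latest} is empty`);

      const dump = gunzipSync(readFileSync(latest)).toString("utf8");
      if (!dump.includes("CREATE TABLE")) throw new Error(`${latest} looks incomplete`);
      console.log(`latest backup OK: ${latest}`);
    }

    checkLatestBackup();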

Do what you can, when you can. Once those are done, then:

  • Verify that the web site and all its components (e.g. web widgets and other third-party code/content) do not include common web application vulnerabilities that can be exploited by attackers (e.g. SQL injection, cross-site scripting).
  • Check what obligations the organisation is under to protect business and other people's data, such as the Data Protection Act, guidance from regulators, trade organisation rules, agreements with customers and other contracts (e.g. PCI DSS via the acquiring bank).
    • impose security standards and obligations on suppliers and partner organisations
    • keep an eye open for changes to business processes that affect data
  • Document (even just some short notes) the steps to rebuild the web site somewhere else, and to transfer all the data and business processes to the new site.
    • include configuration details and information about third-party services required
    • think about what else will need to be done if the web site is unavailable (does it matter, if so what exactly is important?)
  • Provide information to the web site's users about how they can help protect themselves and their data.
    • point them to relevant help such as from GetSafeOnline, CardWatch and Think U Know
    • provide easy methods for them to contact the organisation if they think there is a security or privacy problem
  • Monitor web site usage behaviour (e.g. click-through rate, session duration, shopping cart abandonment rate, conversion rate), performance (e.g. uptime, response times) and reputation (e.g. malware, phishing, suspicious applications, malicious links) to gather trend data and identify unusual activity.
    • web server logs are a start, but customised logging is better (a minimal example follows this list)
    • use reputable online tools (some of which are free) to help.
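
As a small example of going beyond raw server logs (the log format, event name and threshold here are invented for illustration), a periodic job could scan an application log for failed log-ins and flag any source address with an unusual number of failures:

    import { readFileSync } from "node:fs";

    // Flag source IPs with an unusual number of failed log-ins. The log format
    // (tab-separated "timestamp<TAB>event<TAB>ip") and the threshold are assumptions.
    const THRESHOLD = 20;

    function flagSuspiciousIps(logPath: string): string[] {
      const failures = new Map<string, number>();
      for (const line of readFileSync(logPath, "utf8").split("\n")) {
        const [, event, ip] = line.split("\t");
        if (event === "login_failed" && ip) {
          failures.set(ip, (failures.get(ip) ?? 0) + 1);
        }
      }
      return [...failures.entries()]
        .filter(([, count]) => count >= THRESHOLD)
        .map(([ip]) => ip);
    }

    console.log(flagSuspiciousIps("/var/log/app/auth.log"));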

That's just the basics. So, what would be next for an SME? If the web site is a significant sales or engagement channel, or the organisation has multiple web sites, operates in a more regulated sector or one particularly targeted by criminals (e.g. gaming, betting and financial services), takes payments or does other electronic commerce, allows users to add their own content, or processes data on someone else's behalf, then the above is just the start. Those SMEs probably need to be more proactive.

This helps to protect the SME's business information, but also helps to protect the web site users and their information. After all, the users are existing and potential customers, clients and citizens.

Oh, and the best response I have had from someone when explaining my work: "You're an anti-hacker then?". Well, I suppose so, but it's not quite how I'd describe it.

Any comments or suggestions?

Posted on: 02 July 2010 at 08:18 hrs

23 October 2009

Clocks go back this weekend

This weekend the clocks go back as we revert from British Summer Time (BST) to Greenwich Mean Time (GMT) at 02:00 BST on Sunday 25 October 2009, giving us an extra hour.

What does this mean for web site security? Does running 01:00 to 02:00 twice matter? Well, some brave web application owners will be taking their systems offline, like this online bank:

Partial screen capture of a web page notification message saying 'Important information regarding Internet Banking - Please note that the Internet Banking service will be temporarily unavailable due to essential maintenance from 12am until 3am on Sunday 25th October 2009. We apologise for any inconvenience this may cause.'

And I don't think it's just being done as a finale to the current Energy Saving Week. Most people, quite rightly, won't be taking this rather severe step. Another millennium bug, anyone? The date/time should be treated rather like other untrusted user input. Most problems will probably fall into the "business logic" category, such as (a small example follows the list):

  • Failure of time-based logic where dates are being compared.
  • Assumptions of uniqueness in time-stamped output (e.g. by a single-threaded process).
  • Running tasks again leading to possible:
    • loss of data due to overwriting
    • duplication of exports or emails
    • creation of inaccuracies in management information.
  • Chronological ordering anomalies leading to other faults.
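
A small example of the first two points (the record structure is invented for illustration): on changeover night, local wall-clock values such as "01:30" occur twice, so they are unsafe as comparison keys or "unique" identifiers, whereas UTC values remain unambiguous.

    // Local timestamps such as "2009-10-25 01:30" are ambiguous on changeover
    // night; UTC (or epoch milliseconds) is not. Store and compare the latter.
    interface ExportRecord {
      localStamp: string; // e.g. "2009-10-25 01:30" - occurs twice this Sunday
      utcMillis: number;  // unambiguous
    }

    function isDuplicateByLocalStamp(a: ExportRecord, b: ExportRecord): boolean {
      return a.localStamp === b.localStamp; // wrongly merges the 01:30 BST and 01:30 GMT runs
    }

    function isDuplicateByUtc(a: ExportRecord, b: ExportRecord): boolean {
      return a.utcMillis === b.utcMillis;   // distinguishes the two runs correctly
    }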

It's not just banks and other financial organisations that may have difficulties.

Partial screen capture of a web page notification message saying 'Whats New ... Website Downtine - The website will be unavailable on Sunday 25 October 2009 and for a short period of time on the evenings of Friday 6 November 2009 and Sunday 8 November 2009 for essential maintenance. Please accept our apologies for any inconvenience.'

The time change may expose some other vulnerabilities that only exist at changeover time and/or during the next overlap hour.

  • Circumvention of protections against brute force attacks (e.g. time-based lock-outs and rate limits) on user authentication mechanisms.
  • Increased risk due to extension of a session's validity where local time is recorded.
  • Failure in data validation routines for time-related comparisons.
  • Incubated vulnerabilities where a time-related aspect causes the attack to be possible.
  • Denial of service due to extension of account lock-out.
  • Using time as a loop counter.
  • Additional errors caused by any of the above leading to information leakage.

Recording the offset of local time to GMT/UTC and synchronisation should certainly be done, but may not resolve the time overlap issues. The effects on long-running "saga" requests might be especially difficult to determine. Time dependencies need to be specified and considered through the development lifecycle. Perhaps the bank is right after all?

Partial screen capture of a web page notification message saying 'Alcohol & Tobacco Warehousing Declarations (ATWD) ... Saturday 24 October 23:30 – Sunday 25 October 03:30 ... Due to essential maintenance customers will experience a delay in receiving their online acknowledgement to submissions made using our HMRC and commercial software between 23:30 on Saturday 24 October and 03:30 on Sunday 25 October. Your acknowledgement will be sent once the service is restored. Please do not attempt to resubmit your submission. We apologise for any inconvenience this may cause. '

Posted on: 23 October 2009 at 08:41 hrs

29 September 2009

IP Address Restrictions and Exceptions

It's common for access to some web sites to be restricted to users from particular Internet Protocol (IP) addresses. This is usually in addition to some other identification and authentication method. But other IP addresses are often added to this "allow list" and these should not necessarily be trusted in the same way.

Photograph of a sign with an exclamation mark on a yellow triangle that reads 'Caution - Traffic management Trial - DO NOT MOVE' on a construction site boundary's wire barrier

In a typical scenario, a web site hosted on the internet that is used to administer another web application might be restricted to the company's own IP addresses. Then the developers say they need to check something on the live site, or another server needs to index the content, or someone wants to work from home for a while, or the site needs to be demonstrated at a client's location. All these additional IP addresses are added to the "allow list". The restrictions may be applied at a network firewall or traffic management system, at the web server, in the application itself, in intrusion detection systems or in log analysis software, or in several of these. They are difficult to manage, and in time there will be many IP addresses that no-one knows the reason for, unless each is carefully documented and subject to a fixed time limit after which it is either confirmed again by an appropriate person or removed. At least these extra addresses are usually hard for someone else to guess.
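
One way to keep such exceptions manageable is to record an owner, a reason and an expiry date against every entry, and to check requests against that list explicitly. A minimal sketch (the field names are mine, and the addresses come from the 192.0.2.0/24 documentation range):

    // An allow list where every exception records who asked for it, why, and
    // when it expires, so stale entries can be challenged or removed.
    interface AllowListEntry {
      ip: string;
      owner: string;
      reason: string;
      expires: Date;
    }

    const allowList: AllowListEntry[] = [
      { ip: "192.0.2.10", owner: "Ops team",     reason: "office gateway", expires: new Date("2010-01-01") },
      { ip: "192.0.2.77", owner: "A. Developer", reason: "client demo",    expires: new Date("2009-10-31") },
    ];

    function isAllowed(clientIp: string, now: Date = new Date()): boolean {
      // Expired entries count as removed; access via the list is still logged.
      return allowList.some((entry) => entry.ip === clientIp && entry.expires > now);
    }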

However, there is another area where IP addresses are added to "allow lists", and this is for remote monitoring and testing services. These might be checking uptime, response times, content changes, HTML validation or security testing. The service providers publish the IP addresses of the source systems so that companies can specifically allow access to their web sites. Since the number of these services is relatively small, it's not too difficult to find which one might give access to areas of a web site or web application that the public (and malicious people) should not be able to get to. The particular danger here is that the IP addresses might be excluded from monitoring and logging, and therefore even a diligent web site manager might not realise for example the uptime monitoring service is making unusual, or excessive, requests.

Although it is unlikely that a malicious person is using one of these "trusted" addresses unless routing has been compromised as well, problems can go undetected when they come from what seems to be a legitimate source. The IP address may have been typed incorrectly, or worse, the restrictions/exceptions may not have been implemented correctly, allowing more addresses to have the privileged access than intended. And not logging a user's session is itself privileged access.

Allow traffic through, but be very specific what is allowed and monitor what's going on. Review all the exceptions periodically. Be especially careful about anything that bypasses authentication (such as allowing a search engine to crawl restricted-access content) on an otherwise public site.

Posted on: 29 September 2009 at 10:18 hrs

11 August 2009

The Good and the Bad of URL Shorteners

An article on eConsultancy.com discussing the demise of the tr.im URL shortening service mentioned some information security concerns:

...it's impossible to know where they are linking and they can contribute to the spread of malware. But their ease of use keeps growing their popularity. And the companies keep track of all of the information that is shared through shortening, which will be very valuable soon.

There are good discussions concerning URL shorteners on Joshua Schachter's blog, the Mashable social media guide and the Security Retentive corporate information security blog. These will be a little technical for many people, but essentially, apart from the privacy aspect of collecting data, there are other things that may be worth worrying about more (check them against your own situation). If you can, create your own aliasing/redirecting system on your own domain, and make sure it can only be used to redirect to a limited set of your own web site's URLs.
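
A sketch of that last restriction (the short codes and paths are illustrative): the alias table maps codes to a fixed set of paths on the site's own domain, so the redirector cannot be abused as an open redirect to arbitrary destinations.

    // An in-house shortener that only redirects to known paths on its own
    // domain, never to a URL supplied by the requester.
    const aliases: Record<string, string> = {
      report2009: "/publications/annual-report-2009",
      contact:    "/about/contact-us",
    };

    function resolveAlias(code: string): string {
      // Unknown codes fall back to the home page rather than echoing input,
      // which is what turns a redirector into an open-redirect vulnerability.
      return aliases[code] ?? "/";
    }

    // An HTTP handler would then respond, for example:
    // 301 Moved Permanently, Location: https://www.example.org + resolveAlias(code)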

Interestingly, tr.im is another example where organisations may have been using a service for which they had no contract, service level agreement or rights when the service was withdrawn. Be careful what you rely upon.

Posted on: 11 August 2009 at 18:30 hrs

11 June 2009

100,000 Web Sites Lost

The news that the UK hosting company VAServ lost 100,000 web sites all at once is devastating for the organisations involved. It appears that many cannot be recovered and a considerable number do not have recent backups.

From the temporary status page dated 10th June:

We have worked tirelessly through the night and over the last 48 hours to recover as many VPS as possible. However, we have now reached the end of all of our servers, and as such, if your server is not currently up, or not partly up (i.e. it is up but not working due to a configuration issue) then it is unfortunate that you will have lost your data due to this third party attack.

The event was widely reported.

Particularly sobering is the news that the CEO of LxLabs, the company implicated as the developer of the software that was exploited, has committed suicide.

Even if you don't have a formal disaster recovery plan, at least make sure you have backups of all your site code, database and other data.

Posted on: 11 June 2009 at 09:46 hrs


