
Posts relating to the category tag "monitoring" are listed below.

09 April 2014

Third-Party Tracking Cookie Revelations

A new draft paper describes how the capture of tracking cookies can be used for mass surveillance and, combined with other personal information leaked by web sites, can build up a wider picture of a person's real-world identity.

Title page from 'Cookies that give you away: Evaluating the surveillance implications of web tracking'

Dillon Reisman, Steven Englehardt, Christian Eubank, Peter Zimmerman, and Arvind Narayanan at Princeton University's Department of Computer Science investigated how someone with passive access to a network could glean information from observing HTTP cookies in transit. The authors explain how pseudo-anonymous third-party cookies can be tied together without having to rely on IP addresses.

Then, given personal data leaking over non-SSL connections, these identifiers can be combined into a larger picture of the person. The paper also assesses what personal information is leaked from the Alexa Top 50 sites that support logins.
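The linking idea can be sketched simply: if the same browser sends two different third-party tracking cookies during one page load, those cookies must belong to the same user, and applying this transitively clusters cookies into per-user profiles without any reference to IP addresses. A minimal illustration follows (the tracker domains, cookie IDs, and observed page loads are hypothetical, and this is only a sketch of the principle, not the paper's implementation):

```python
# Sketch: cluster third-party tracking cookies that co-occur in the same
# page loads, as a passive network observer could. Cookie IDs and the
# observed page loads below are hypothetical.

def cluster_cookies(page_loads):
    """Each page load is a list of (tracker, cookie_id) pairs seen in its
    third-party requests. Pairs seen together belong to one user, so we
    union them and return the resulting clusters."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for load in page_loads:
        for pair in load:
            find(pair)  # register every observed cookie
        for other in load[1:]:
            union(load[0], other)  # same browser, same page load

    clusters = {}
    for cookie in parent:
        clusters.setdefault(find(cookie), set()).add(cookie)
    return list(clusters.values())

# Three observed page loads: two by one user, one by another.
loads = [
    [("adnet.example", "A1"), ("metrics.example", "M7")],
    [("metrics.example", "M7"), ("social.example", "S3")],
    [("adnet.example", "Z9")],  # a different user
]
print(cluster_cookies(loads))  # two clusters: {A1, M7, S3} and {Z9}
```

Note that the cookie shared between the two page loads ("M7") is what ties the first user's three pseudonymous identifiers together; this is the transitive linking the paper exploits.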

This work is likely to attract the attention of privacy advocates and regulators, leading to increased interest in cookies and other tracking mechanisms.

The research work was motivated by two leaked NSA documents.

Posted on: 09 April 2014 at 10:02 hrs


04 April 2014

Regulation of Software with a Medical Purpose

I seem to be writing a series of regulation-related posts at the moment; perhaps it is the time of year. An article on OutLaw.com discusses how mobile apps and other software with a medical purpose may be subject to regulation.

Photograph of shelves in a shop displaying rows of medications

The UK's Medicines and Healthcare products Regulatory Agency (MHRA) is responsible for regulating all medicines and medical devices in the UK by ensuring they work and are acceptably safe. It has issued new guidance on "medical device stand-alone software (including apps)", which is defined as "software which has a medical purpose which at the time of it being placed onto the market is not incorporated into a medical device". Thus "software... intended by the manufacturer to be used for human beings for the purpose of:

  • diagnosis, prevention, monitoring, treatment or alleviation of disease,
  • diagnosis, monitoring, treatment, alleviation of or compensation for an injury or handicap,
  • investigation, replacement or modification of the anatomy or of a physiological process,
  • control of conception..."

Guidance on Medical Device Stand-alone Software (Including Apps) describes the scope, requirements and software-specific considerations. Product liability and safety considerations are also mentioned.

This introduces the potential need for registration, documentation, self-assessment, validation, monitoring and incident reporting, especially if the software performs any form of diagnosis or assessment. The OutLaw.com article provides a good analysis and views from experts.

Posted on: 04 April 2014 at 10:11 hrs


07 March 2014

PCIDSS SAQ A-EP and SAQ A: Comparison of Questions

PCIDSS SAQ A-EP and SAQ A are very different in PCIDSS version 3.0, and there are some minor changes between SAQ A versions 2.0 and 3.0.

SAQ A-EP has been developed to address requirements applicable to e-commerce merchants with a website(s) that does not itself receive cardholder data but which does affect the security of the payment transaction and/or the integrity of the page that accepts the consumer's cardholder data.

In the table below, "Y" indicates the question is included in the SAQ; the three columns show inclusion in SAQ A v2.0, SAQ A v3.0 and SAQ A-EP v3.0 respectively. The question text is taken from PCIDSS v3.0, and there are some numbering differences with version 2.0 under requirement 9 (the v2.0 numbers are shown in parentheses). The Self-Assessment Questionnaire (SAQ) for "Partially Outsourced E-commerce Merchants Using a Third-Party Website for Payment Processing" (SAQ A-EP) contains an order of magnitude more questions than SAQ A.

See my previous post for information about the SAQ A-EP eligibility criteria for e-commerce merchants and another post providing an introduction to the change.

Do all these questions apply to your own web site/e-commerce environment? The only answer to this is what your acquirer or payment brand requires of you, in your region (e.g. Europe). It may be that ecommerce-only merchants with fewer transactions (such as levels 3 and 4) are asked to use an acquirer's risk-based approach, or only certain milestones in the PCIDSS prioritised approach.

And of course some questions may relate to PCIDSS requirements that are deemed not applicable to your environment, in which case the "N/A" option is selected and the "Explanation of Non-Applicability" worksheet in Appendix C of SAQ A-EP is completed for each "N/A" entry.

And to limit the PCIDSS scope, segmentation will be required to isolate the relevant e-commerce systems from other system components (see eligibility criteria), preferably also isolating as much of the non-e-commerce aspects of the website as possible. However, most of the designated PCIDSS requirements ought to be in place for security reasons anyway. Hopefully.

PCIDSS Self-Assessment Questionnaire (SAQ) Question | SAQ A v2.0 | SAQ A v3.0 | SAQ A-EP v3.0
1.1.4 (a) Is a firewall required and implemented at each Internet connection and between any demilitarized zone (DMZ) and the internal network zone? Y
(b) Is the current network diagram consistent with the firewall configuration standards? Y
1.1.6 (a) Do firewall and router configuration standards include a documented list of services, protocols, and ports, including business justification (for example, hypertext transfer protocol (HTTP), Secure Sockets Layer (SSL), Secure Shell (SSH), and Virtual Private Network (VPN) protocols)? Y
(b) Are all insecure services, protocols, and ports identified, and are security features documented and implemented for each identified service? Y
1.2 Do firewall and router configurations restrict connections between untrusted networks and any system in the cardholder data environment as follows:
Note: An "untrusted network" is any network that is external to the networks belonging to the entity under review, and/or which is out of the entity's ability to control or manage.
1.2.1 (a) Is inbound and outbound traffic restricted to that which is necessary for the cardholder data environment? Y
(b) Is all other inbound and outbound traffic specifically denied (for example by using an explicit "deny all" or an implicit deny after allow statement)? Y
1.3.4 Are anti-spoofing measures implemented to detect and block forged sourced IP addresses from entering the network? (For example, block traffic originating from the internet with an internal address) Y
1.3.5 Is outbound traffic from the cardholder data environment to the Internet explicitly authorized? Y
1.3.6 Is stateful inspection, also known as dynamic packet filtering, implemented--that is, only established connections are allowed into the network? Y
1.3.8 (a) Are methods in place to prevent the disclosure of private IP addresses and routing information to the Internet?
Note: Methods to obscure IP addressing may include, but are not limited to:
* Network Address Translation (NAT)
* Placing servers containing cardholder data behind proxy servers/firewalls
* Removal or filtering of route advertisements for private networks that employ registered addressing
* Internal use of RFC1918 address space instead of registered addresses.
(b) Is any disclosure of private IP addresses and routing information to external entities authorized? Y
2.1 (a) Are vendor-supplied defaults always changed before installing a system on the network?
This applies to ALL default passwords, including but not limited to those used by operating systems, software that provides security services, application and system accounts, point-of-sale (POS) terminals, Simple Network Management Protocol (SNMP) community strings, etc.).
(b) Are unnecessary default accounts removed or disabled before installing a system on the network? Y
2.2 (a) Are configuration standards developed for all system components and are they consistent with industry-accepted system hardening standards?
Sources of industry-accepted system hardening standards may include, but are not limited to, SysAdmin Audit Network Security (SANS) Institute, National Institute of Standards Technology (NIST), International Organization for Standardization (ISO), and Center for Internet Security (CIS).
(b) Are system configuration standards updated as new vulnerability issues are identified, as defined in Requirement 6.1? Y
(c) Are system configuration standards applied when new systems are configured? Y
(d) Do system configuration standards include all of the following:
* Changing of all vendor-supplied defaults and elimination of unnecessary default accounts?
* Implementing only one primary function per server to prevent functions that require different security levels from co-existing on the same server?
* Enabling only necessary services, protocols, daemons, etc., as required for the function of the system?
* Implementing additional security features for any required services, protocols or daemons that are considered to be insecure?
* Configuring system security parameters to prevent misuse?
* Removing all unnecessary functionality, such as scripts, drivers, features, subsystems, file systems, and unnecessary web servers?
2.2.1 (a) Is only one primary function implemented per server, to prevent functions that require different security levels from co-existing on the same server?
For example, web servers, database servers, and DNS should be implemented on separate servers.
(b) If virtualization technologies are used, is only one primary function implemented per virtual system component or device? Y
2.2.2 (a) Are only necessary services, protocols, daemons, etc. enabled as required for the function of the system (services and protocols not directly needed to perform the device's specified function are disabled)? Y
(b) Are all enabled insecure services, daemons, or protocols justified per documented configuration standards? Y
2.2.3 Are additional security features documented and implemented for any required services, protocols or daemons that are considered to be insecure?
For example, use secured technologies such as SSH, S-FTP, SSL or IPSec VPN to protect insecure services such as NetBIOS, file-sharing, Telnet, FTP, etc.
2.2.4 (a) Are system administrators and/or personnel that configure system components knowledgeable about common security parameter settings for those system components? Y
(b) Are common system security parameters settings included in the system configuration standards? Y
(c) Are security parameter settings set appropriately on system components? Y
2.2.5 (a) Has all unnecessary functionality--such as scripts, drivers, features, subsystems, file systems, and unnecessary web servers--been removed? Y
(b) Are enabled functions documented and do they support secure configuration? Y
(c) Is only documented functionality present on system components? Y
2.3 Is non-console administrative access encrypted as follows: Use technologies such as SSH, VPN, or SSL/TLS for web-based management and other non-console administrative access.
(a) Is all non-console administrative access encrypted with strong cryptography, and is a strong encryption method invoked before the administrator's password is requested? Y
(b) Are system services and parameter files configured to prevent the use of Telnet and other insecure remote login commands? Y
(c) Is administrator access to web-based management interfaces encrypted with strong cryptography? Y
(d) For the technology in use, is strong cryptography implemented according to industry best practice and/or vendor recommendations? Y
3.2 (c) Is sensitive authentication data deleted or rendered unrecoverable upon completion of the authorization process? Y
(d) Do all systems adhere to the following requirements regarding non-storage of sensitive authentication data after authorization (even if encrypted): Y
3.2.2 The card verification code or value (three-digit or four-digit number printed on the front or back of a payment card) is not stored after authorisation Y
3.2.3 The personal identification number (PIN) or the encrypted PIN block is not stored after authorization? Y
4.1 (a) Are strong cryptography and security protocols, such as SSL/TLS, SSH or IPSEC, used to safeguard sensitive cardholder data during transmission over open, public networks?
Examples of open, public networks include but are not limited to the Internet; wireless technologies, including 802.11 and Bluetooth; cellular technologies, for example, Global System for Mobile communications (GSM), Code division multiple access (CDMA); and General Packet Radio Service (GPRS).
(b) Are only trusted keys and/or certificates accepted? Y
(c) Are security protocols implemented to use only secure configurations, and to not support insecure versions or configurations? Y
(d) Is the proper encryption strength implemented for the encryption methodology in use (check vendor recommendations/best practices)? Y
(e) For SSL/TLS implementations, is SSL/TLS enabled whenever cardholder data is transmitted or received?
For example, for browser-based implementations:
* "HTTPS" appears as the browser Universal Record Locator (URL) protocol, and
* Cardholder data is only requested if "HTTPS" appears as part of the URL.
4.2 (b) Are policies in place that state that unprotected PANs are not to be sent via end-user messaging technologies? Y
5.1 Is anti-virus software deployed on all systems commonly affected by malicious software? Y
5.1.1 Are anti-virus programs capable of detecting, removing, and protecting against all known types of malicious software (for example, viruses, Trojans, worms, spyware, adware, and rootkits)? Y
5.1.2 Are periodic evaluations performed to identify and evaluate evolving malware threats in order to confirm whether those systems considered to not be commonly affected by malicious software continue as such? Y
5.2 Are all anti-virus mechanisms maintained as follows:
(a) Are all anti-virus software and definitions kept current? Y
(b) Are automatic updates and periodic scans enabled and being performed? Y
(c) Are all anti-virus mechanisms generating audit logs, and are logs retained in accordance with PCI DSS Requirement 10.7? Y
5.3 Are all anti-virus mechanisms:
* Actively running?
* Unable to be disabled or altered by users?
Note: Anti-virus solutions may be temporarily disabled only if there is legitimate technical need, as authorized by management on a case-by-case basis. If anti-virus protection needs to be disabled for a specific purpose, it must be formally authorized. Additional security measures may also need to be implemented for the period of time during which anti-virus protection is not active.
6.1 Is there a process to identify security vulnerabilities, including the following:
* Using reputable outside sources for vulnerability information?
* Assigning a risk ranking to vulnerabilities that includes identification of all "high" risk and "critical" vulnerabilities?
Note: Risk rankings should be based on industry best practices as well as consideration of potential impact. For example, criteria for ranking vulnerabilities may include consideration of the CVSS base score and/or the classification by the vendor, and/or type of systems affected.
Methods for evaluating vulnerabilities and assigning risk ratings will vary based on an organization's environment and risk assessment strategy. Risk rankings should, at a minimum, identify all vulnerabilities considered to be a "high risk" to the environment. In addition to the risk ranking, vulnerabilities may be considered "critical" if they pose an imminent threat to the environment, impact critical systems, and/or would result in a potential compromise if not addressed. Examples of critical systems may include security systems, public-facing devices and systems, databases, and other systems
6.2 (a) Are all system components and software protected from known vulnerabilities by installing applicable vendor-supplied security patches? Y
(b) Are critical security patches installed within one month of release?
Note: Critical security patches should be identified according to the risk ranking process defined in Requirement 6.1.
6.4.5 (a) Are change-control procedures for implementing security patches and software modifications documented and require the following?
* Documentation of impact
* Documented change control approval by authorized parties
* Functionality testing to verify that the change does not adversely impact the security of the system
* Back-out procedures
(b) Are the following performed and documented for all changes:
* Documentation of impact? Y
* Documented approval by authorized parties? Y
* Functionality testing to verify that the change does not adversely impact the security of the system? Y
* For custom code changes, testing of updates for compliance with PCI DSS Requirement 6.5 before being deployed into production? Y
* Back-out procedures? Y
6.5 (c) Are applications developed based on secure coding guidelines to protect applications from, at a minimum, the following vulnerabilities:
6.5.1 Do coding techniques address injection flaws, particularly SQL injection?
Note: Also consider OS Command Injection, LDAP and XPath injection flaws as well as other injection flaws.
6.5.2 Do coding techniques address buffer overflow vulnerabilities? Y
For web applications and application interfaces (internal or external), are applications developed based on secure coding guidelines to protect applications from the following additional vulnerabilities:
6.5.7 Do coding techniques address cross-site scripting (XSS) vulnerabilities? Y
6.5.8 Do coding techniques address improper access control such as insecure direct object references, failure to restrict URL access, directory traversal, and failure to restrict user access to functions? Y
6.5.9 Do coding techniques address cross-site request forgery (CSRF)? Y
6.5.10 Do coding techniques address broken authentication and session management?
Note: Requirement 6.5.10 is a best practice until June 30, 2015, after which it becomes a requirement.
6.6 For public-facing web applications, are new threats and vulnerabilities addressed on an ongoing basis, and are these applications protected against known attacks by applying either of the following methods?
* Reviewing public-facing web applications via manual or automated application vulnerability security assessment tools or methods, as follows:
- At least annually
- After any changes
- By an organization that specializes in application security
- That all vulnerabilities are corrected
- That the application is re-evaluated after the corrections
Note: This assessment is not the same as the vulnerability scans performed for Requirement 11.2.
- OR -
* Installing an automated technical solution that detects and prevents web-based attacks (for example, a web-application firewall) in front of public-facing web applications to continually check all traffic.
7.1 Is access to system components and cardholder data limited to only those individuals whose jobs require such access, as follows:
7.1.2 Is access to privileged user IDs restricted as follows:
* To least privileges necessary to perform job responsibilities?
* Assigned only to roles that specifically require that privileged access?
7.1.3 Is access assigned based on individual personnel's job classification and function? Y
8.1.1 Are all users assigned a unique ID before allowing them to access system components or cardholder data? Y
8.1.3 Is access for any terminated users immediately deactivated or removed? Y
8.1.5 (a) Are accounts used by vendors to access, support, or maintain system components via remote access enabled only during the time period needed and disabled when not in use? Y
(b) Are vendor remote access accounts monitored when in use? Y
8.1.6 (a) Are repeated access attempts limited by locking out the user ID after no more than six attempts? Y
8.1.7 Once a user account is locked out, is the lockout duration set to a minimum of 30 minutes or until an administrator enables the user ID? Y
8.2 In addition to assigning a unique ID, is one or more of the following methods employed to authenticate all users:
* Something you know, such as a password or passphrase
* Something you have, such as a token device or smart card
* Something you are, such as a biometric
8.2.1 (a) Is strong cryptography used to render all authentication credentials (such as passwords/phrases) unreadable during transmission and storage on all system components? Y
8.2.3 (a) Are user password parameters configured to require passwords/passphrases meet the following?
* A minimum password length of at least seven characters
* Contain both numeric and alphabetic characters
Alternatively, the passwords/phrases must have complexity and strength at least equivalent to the parameters specified above.
8.2.4 (a) Are user passwords/passphrases changed at least every 90 days? Y
8.2.5 (a) Must an individual submit a new password/phrase that is different from any of the last four passwords/phrases he or she has used? Y
8.2.6 Are passwords/phrases set to a unique value for each user for first-time use and upon reset, and must each user change their password immediately after the first use? Y
8.3 Is two-factor authentication incorporated for remote network access originating from outside the network by personnel (including users and administrators) and all third parties (including vendor access for support or maintenance)?
Note: Two-factor authentication requires that two of the three authentication methods (see PCI DSS Requirement 8.2 for descriptions of authentication methods) be used for authentication. Using one factor twice (for example, using two separate passwords) is not considered two-factor authentication.
Examples of two-factor technologies include remote authentication and dial-in service (RADIUS) with tokens; terminal access controller access control system (TACACS) with tokens; and other technologies that facilitate two-factor authentication.
8.5 Are group, shared, or generic accounts, passwords, or other authentication methods prohibited as follows:
* Generic user IDs and accounts are disabled or removed;
* Shared user IDs for system administration activities and other critical functions do not exist; and
* Shared and generic user IDs are not used to administer any system components?
8.6 Where other authentication mechanisms are used (for example, physical or logical security tokens, smart cards, and certificates, etc.), is the use of these mechanisms assigned as follows?
* Authentication mechanisms must be assigned to an individual account and not shared among multiple accounts
* Physical and/or logical controls must be in place to ensure only the intended account can use that mechanism to gain access
9.1 Are appropriate facility entry controls in place to limit and monitor physical access to systems in the cardholder data environment? Y
9.5 Are all media physically secured (including but not limited to computers, removable electronic media, paper receipts, paper reports, and faxes)? Y (9.6) Y Y
For purposes of Requirement 9, "media" refers to all paper and electronic media containing cardholder data.
9.6 (a) Is strict control maintained over the internal or external distribution of any kind of media? Y (9.7) Y Y
(b) Do controls include the following:
9.6.1 Is media classified so the sensitivity of the data can be determined? Y (9.7.1) Y Y
9.6.2 Is media sent by secured courier or other delivery method that can be accurately tracked? Y (9.7.2) Y Y
9.6.3 Is management approval obtained prior to moving the media (especially when media is distributed to individuals)? Y (9.8) Y Y
9.7 Is strict control maintained over the storage and accessibility of media? Y (9.9) Y Y
9.8 (a) Is all media destroyed when it is no longer needed for business or legal reasons? Y (9.10) Y Y
(c) Is media destruction performed as follows:
9.8.1 (a) Are hardcopy materials cross-cut shredded, incinerated, or pulped so that cardholder data cannot be reconstructed? Y (9.10.1a) Y Y
(b) Are storage containers used for materials that contain information to be destroyed secured to prevent access to the contents? Y (9.10.1b) Y Y
10.2 Are automated audit trails implemented for all system components to reconstruct the following events:
10.2.2 All actions taken by any individual with root or administrative privileges? Y
10.2.4 Invalid logical access attempts? Y
10.2.5 Use of and changes to identification and authentication mechanisms--including but not limited to creation of new accounts and elevation of privileges--and all changes, additions, or deletions to accounts with root or administrative privileges? Y
10.3 Are the following audit trail entries recorded for all system components for each event:
10.3.1 User identification? Y
10.3.2 Type of event? Y
10.3.3 Date and time? Y
10.3.4 Success or failure indication? Y
10.3.5 Origination of event? Y
10.3.6 Identity or name of affected data, system component, or resource? Y
10.5.4 Are logs for external-facing technologies (for example, wireless, firewalls, DNS, mail) written onto a secure, centralized, internal log server or media? Y
10.6 Are logs and security events for all system components reviewed to identify anomalies or suspicious activity as follows?
Note: Log harvesting, parsing, and alerting tools may be used to achieve compliance with Requirement 10.6.
10.6.1 (b) Are the following logs and security events reviewed at least daily, either manually or via log tools?
* All security events
* Logs of all system components that store, process, or transmit CHD and/or SAD, or that could impact the security of CHD and/or SAD
* Logs of all critical system components
* Logs of all servers and system components that perform security functions (for example, firewalls, intrusion-detection systems/intrusion-prevention systems (IDS/IPS), authentication servers, e-commerce redirection servers, etc.)?
10.6.2 (b) Are logs of all other system components periodically reviewed--either manually or via log tools--based on the organization's policies and risk management strategy? Y
10.6.3 (b) Is follow up to exceptions and anomalies identified during the review process performed? Y
10.7 (b) Are audit logs retained for at least one year? Y
(c) Are at least the last three months' logs immediately available for analysis? Y
11.2.2 (a) Are quarterly external vulnerability scans performed?
Note: Quarterly external vulnerability scans must be performed by an Approved Scanning Vendor (ASV), approved by the Payment Card Industry Security Standards Council (PCI SSC).
Refer to the ASV Program Guide published on the PCI SSC website for scan customer responsibilities, scan preparation, etc.
(b) Do external quarterly scan and rescan results satisfy the ASV Program Guide requirements for a passing scan (for example, no vulnerabilities rated 4.0 or higher by the CVSS, and no automatic failures)? Y
(c) Are quarterly external vulnerability scans performed by a PCI SSC Approved Scanning Vendor (ASV)? Y
11.2.3 (a) Are internal and external scans, and rescans as needed, performed after any significant change?
Note: Scans must be performed by qualified personnel.
(b) Does the scan process include rescans until:
* For external scans, no vulnerabilities exist that are scored 4.0 or higher by the CVSS;
* For internal scans, a passing result is obtained or all "high-risk" vulnerabilities as defined in PCI DSS Requirement 6.1 are resolved?
(c) Are scans performed by a qualified internal resource(s) or qualified external third party, and if applicable, does organizational independence of the tester exist (not required to be a QSA or ASV)? Y
11.3 Does the penetration-testing methodology include the following?
* Is based on industry-accepted penetration testing approaches (for example, NIST SP800-115)
* Includes coverage for the entire CDE perimeter and critical systems
* Includes testing from both inside and outside the network
* Includes testing to validate any segmentation and scope-reduction controls
* Defines application-layer penetration tests to include, at a minimum, the vulnerabilities listed in Requirement 6.5
* Defines network-layer penetration tests to include components that support network functions as well as operating systems
* Includes review and consideration of threats and vulnerabilities experienced in the last 12 months
* Specifies retention of penetration testing results and remediation activities results
11.3.1 (a) Is external penetration testing performed per the defined methodology, at least annually, and after any significant infrastructure or application changes to the environment (such as an operating system upgrade, a sub-network added to the environment, or an added web server)? Y
(b) Are tests performed by a qualified internal resource or qualified external third party, and if applicable, does organizational independence of the tester exist (not required to be a QSA or ASV)? Y
11.3.3 Are exploitable vulnerabilities found during penetration testing corrected, followed by repeated testing to verify the corrections? Y
11.3.4 (a) [If segmentation is used to isolate the CDE from other networks:] Are penetration-testing procedures defined to test all segmentation methods, to confirm they are operational and effective, and isolate all out-of-scope systems from in-scope systems? Y
(b) Does penetration testing to verify segmentation controls meet the following?
* Performed at least annually and after any changes to segmentation controls/methods
* Covers all segmentation controls/methods in use
* Verifies that segmentation methods are operational and effective, and isolate all out-of-scope systems from in-scope systems.
11.5 (a) Is a change-detection mechanism (for example, file integrity monitoring tools) deployed within the cardholder data environment to detect unauthorized modification of critical system files, configuration files, or content files?
Examples of files that should be monitored include:
* System executables
* Application executables
* Configuration and parameter files
* Centrally stored, historical or archived, log, and audit files
* Additional critical files determined by entity (for example, through risk assessment or other means)
(b) Is the change-detection mechanism configured to alert personnel to unauthorized modification of critical system files, configuration files or content files, and do the tools perform critical file comparisons at least weekly?
Note: For change detection purposes, critical files are usually those that do not regularly change, but the modification of which could indicate a system compromise or risk of compromise. Change detection mechanisms such as file-integrity monitoring products usually come pre-configured with critical files for the related operating system. Other critical files, such as those for custom applications, must be evaluated and defined by the entity (that is the merchant or service provider).
11.5.1 Is a process in place to respond to any alerts generated by the change-detection solution? Y
12.1 Is a security policy established, published, maintained, and disseminated to all relevant personnel? Y
12.1.1 Is the security policy reviewed at least annually and updated when the environment changes? Y
12.4 Do security policy and procedures clearly define information security responsibilities for all personnel? Y
12.5 (b) Are the following information security management responsibilities formally assigned to an individual or team:
12.5.3 Establishing, documenting, and distributing security incident response and escalation procedures to ensure timely and effective handling of all situations? Y
12.6 (a) Is a formal security awareness program in place to make all personnel aware of the importance of cardholder data security? Y
12.8 Are policies and procedures maintained and implemented to manage service providers with whom cardholder data is shared, or that could affect the security of cardholder data, as follows:
12.8.1 Is a list of service providers maintained? Y Y Y
12.8.2 Is a written agreement maintained that includes an acknowledgement that the service providers are responsible for the security of cardholder data the service providers possess or otherwise store, process, or transmit on behalf of the customer, or to the extent that they could impact the security of the customer's cardholder data environment?
Note: The exact wording of an acknowledgement will depend on the agreement between the two parties, the details of the service being provided, and the responsibilities assigned to each party. The acknowledgement does not have to include the exact wording provided in this requirement.
12.8.3 Is there an established process for engaging service providers, including proper due diligence prior to engagement? Y Y Y
12.8.4 Is a program maintained to monitor service providers' PCI DSS compliance status at least annually? Y Y Y
12.8.5 Is information maintained about which PCI DSS requirements are managed by each service provider, and which are managed by the entity? Y Y
12.10.1 (a) Has an incident response plan been created to be implemented in the event of system breach? Y
(b) Does the plan address the following, at a minimum:
* Roles, responsibilities, and communication and contact strategies in the event of a compromise including notification of the payment brands, at a minimum? Y
* Specific incident response procedures? Y
* Business recovery and continuity procedures? Y
* Data backup processes? Y
* Analysis of legal requirements for reporting compromises? Y
* Coverage and responses of all critical system components? Y
* Reference or inclusion of incident response procedures from the payment brands? Y
Total Number of Questions 13 14 139

Well, sorry this page is so long. When I began, I thought it was a useful idea, and once started I wanted to complete the list. It is useful to me, if no-one else.

Posted on: 07 March 2014 at 15:24 hrs

Comments (0) | Permalink | Send | Post to Twitter

29 January 2014

Privacy Notices and Supplier Contracts

Over Christmas I caught up with a backlog of news stories, tweets and bookmarked items. One relating to privacy notices surprised me, despite being quite an old item.

Photograph of a locked wooden door with an adjacent metal enclosure housing a keypad, video camera, microphone and loudspeaker - a sign on the door reads 'Keep locked shut' and another handmade sign reads 'Visitors - Please press buzzer and show ID to the camera - thank you'

It seems Google's terms of service (UK version) for Google Analytics impose certain privacy requirements on its users (web site operators).

The web post identifies obligations placed on web site operators:

  • Have a privacy policy
  • Abide by all applicable laws relating to the collection of information from visitors
  • Disclose the use of third-party tracking and the use of cookies for tracking

There are additional requirements for users of AdWords and AdSense. A handy reminder that your suppliers can be the source of additional information security and privacy mandates.

After all, if you have an incident, you don't want to be found breaking contractual obligations as well.

Posted on: 29 January 2014 at 08:35 hrs


11 October 2013

Application-Layer Denial of Service Attacks

We often hear about infrastructure denial of service (DoS) attacks, but traditionally there has not been much data available on application-layer DoS.

One of the charts from the Q2 2013 DDoS Attack Report

The Prolexic Distributed DoS (DDoS) attack reports include a comprehensive analysis of data from the company's own networks. Application-layer attacks against their clients accounted for 25% of attacks, with the remainder against infrastructure (OSI layers 3 and 4). Among the infrastructure attacks, SYN floods accounted for almost half of all attacks (the report's text says 31.22% of infrastructure attacks, but the data in Figure 3 suggests it is 31.22% of all attacks).

The number of attacks has risen by a third, but it is not clear whether this is due to the company having more clients, or because there were more attacks against each client. The points I found of most interest:

  • Compromised web servers are now the preferred method of attack, rather than botnets of home PCs
  • An average attack lasts less than two days
  • The average bandwidth is almost 50 Gbps, but half of attacks are less than 5 Gbps, and a fifth are less than 1 Gbps
  • GET floods account for the majority of application-layer DDoS attacks
  • Many low-volume attacks are easy to launch without significant skill
  • Amplification attacks, where the attacker spoofs the source address to be that of the ultimate target and sends requests to intermediary victim servers, are favoured due to the additional impact and source obfuscation
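Some of this application-layer flood behaviour can be spotted at the application tier itself; below is a minimal sliding-window rate-limiter sketch for flagging GET-flood-like clients (the window and threshold values are illustrative assumptions):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # illustrative window length
MAX_REQUESTS = 100    # illustrative per-client threshold

class RateLimiter:
    """Track request timestamps per client and flag those exceeding the threshold."""

    def __init__(self, window=WINDOW_SECONDS, limit=MAX_REQUESTS):
        self.window = window
        self.limit = limit
        self.hits = defaultdict(deque)

    def allow(self, client_ip, now=None):
        """Record a request; return False once the client exceeds the window limit."""
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Drop timestamps that have fallen out of the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        return len(q) <= self.limit
```

A blunt mechanism like this will not stop a distributed flood on its own, but it gives the application a signal to throttle, challenge or log abusive sources.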

The reports are free to download after registration.

See also previous posts on Denial of Service Attack Defences and Distributed Denial of Service Attacks.

Posted on: 11 October 2013 at 08:14 hrs


01 September 2013

Proactive Attack Detection

In late 2011 and 2012, the European Network and Information Security Agency (ENISA) published the results of a workshop and study into the proactive detection of security incidents. It is useful to look back and see how the recommendations in those reports sit against current advances in application-layer attack detection.

Proactive detection of incidents is the process of discovery of malicious activity in a CERT's constituency through internal monitoring tools or external services that publish information about detected incidents, before the affected constituents become aware of the problem.

This approach concerns detecting and blocking attacks using information and tools, instead of the more usual identification and treatment of actual incidents. Two reports were published. The first report refers to using these external security feeds to detect attacks and the second report is a best practice document on honeypots including an analysis of the available solutions. Both reports contain very useful information for operational security teams. However, I think production application resources (both as information sources and tools) could have been highlighted further.

In the first report, the most application-centric services (5.2) and tools/mechanisms (5.3) evaluated for the proactive detection of "network" security incidents are Project Honeypot (5.2.25), Spamtrap (5.3.10), Web Application Firewalls (5.3.11), and Application Logs (5.3.12). The latter does not appear to mention all the possible useful data that could be recorded in well-designed custom application security logging. Application-specific attack detection and real-time response is not mentioned, not even in the discussion of IDS/IPS (5.3.5).

This type of attack information would be a very rich source for data quality enrichment (8.1.3), since actual application misuse, identified with a very high degree of confidence, could complement feeds from other sources. AppSensor-like capabilities can also assist with the automation of some data processing and correlation. Applications, which probably fall somewhere between the report's "advanced" and "upcoming" categories (6.3) of "must-have" tools and mechanisms, could also be used in a wider network of attack sensors (6.3.2). These concepts, and other benefits, are discussed in the soon-to-be-issued new OWASP AppSensor Guide.
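In outline, an AppSensor-like capability counts detection-point events per user and triggers a response once a threshold is crossed; here is a minimal sketch of that idea (the detection-point names, thresholds and responses are illustrative assumptions, not the OWASP reference implementation):

```python
from collections import defaultdict

# Illustrative thresholds: events per user before a response fires
THRESHOLDS = {"AUTH_FAILURE": 5, "INPUT_VIOLATION": 3}
RESPONSES = {"AUTH_FAILURE": "lock_account", "INPUT_VIOLATION": "terminate_session"}

class AppSensor:
    """Count security events per (user, detection point) and return a
    response identifier once the threshold for that point is reached."""

    def __init__(self):
        self.events = defaultdict(int)

    def record(self, user, detection_point):
        key = (user, detection_point)
        self.events[key] += 1
        if self.events[key] >= THRESHOLDS.get(detection_point, float("inf")):
            return RESPONSES.get(detection_point)
        return None
```

Because the application itself raises these events, the confidence that they represent genuine misuse is far higher than for generic network-layer alerts, which is exactly why they would enrich the feeds the ENISA report evaluates.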

In the second ENISA report, HTTP proxies are mentioned as a useful part of web honeypot deployments (3.8). The evaluation of honeypots includes mention of High Interaction Honeypot Analysis Toolkit (HiHAT), which I am using in a web application honeypot comparison trial. In the taxonomy of honeypots "high-interaction" relates to a real resource rather than an emulation. AppSensor would appear to offer some capabilities against non-blind attacks against certain services (10.3).

The reports are still invaluable and information-rich. They provide evaluated sources of information that can increase attack-warning and self-detection capabilities. But hopefully in due course we can spread the word about application intrusion detection to these communities as well.

Posted on: 01 September 2013 at 21:13 hrs


19 April 2013

AppSensor at Security B-Sides London

Next week Dinis Cruz and I will be running an AppSensor workshop at Security B-Sides London 2013.

Photograph of a clock at the prime meridian in Greenwich looking towards central London and the banks at Canary Wharf

We will be demonstrating and helping attendees of the workshop specify, define and implement application-specific attack detection and real-time response. Our agenda is:

  • OWASP AppSensor concept
  • Attack detection exercise
  • Real world implementation
  • Alternative deployment models

We'll be using paper-based materials and real code demonstrations (in .Net, Java and PHP), so just bring your brains along. The workshop runs from 14:00 to 15:30 hrs on Wednesday 24th April 2013 and can be booked on arrival at the event, on a first come, first served basis. Security B-Sides London is a community-driven free event; registration is required and, due to overwhelming demand, there is a waiting list.

We hope to see you there.

Posted on: 19 April 2013 at 08:41 hrs


26 February 2013

OWASP NL 13.03.13

I will be travelling to Nijmegen on Wednesday 13th March having been invited to speak at the OWASP Netherlands local chapter.

Photograph of three airport departure boards with one displaying the blue screen of death in contrast to the flight departures listed on the other two

At the meeting in the Radboud Universiteit Nijmegen, I will present two brand new talks.

  • "Record It!" — Do you know what security event information should be recorded by an application? The presentation will outline which event properties are useful, what should be avoided and how logging can be implemented. In this short presentation, the benefits of good application logging will also be described. The content is drawn from the OWASP (Application Security) Logging Cheat Sheet.
  • "OWASP Cornucopia" — Microsoft's Escalation of Privilege (EoP) threat modelling card game has been refreshed into a new version more suitable for common web applications, and aligned with OWASP advice and guides. The PCI DSS referenced OWASP Cornucopia - Ecommerce Web Application Edition will be presented and used to demonstrate how it can help developers identify security requirements from the OWASP Secure Coding Practices - Quick Reference Guide.
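To give a flavour of the first talk's subject, the kind of event properties discussed in the Logging Cheat Sheet (when, who, where, what and the outcome) can be emitted as structured records with the standard library alone; a minimal sketch, in which the field names are illustrative assumptions:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("security")
logging.basicConfig(level=logging.INFO)

def log_security_event(event, user, source_ip, outcome, **extra):
    """Emit a structured security event covering when, who, where, what and result."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "event": event,                                        # what
        "user": user,                                          # who
        "source_ip": source_ip,                                # where from
        "outcome": outcome,                                    # result
    }
    record.update(extra)  # any additional context, e.g. a failure reason
    logger.info(json.dumps(record))
    return record

# e.g. log_security_event("login", "alice", "192.0.2.1", "failure", reason="bad password")
```

Structured records like this are much easier to search, correlate and feed into monitoring than free-text log lines.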

OWASP board member Jim Manico is also presenting on the subject of "Access Control Design Best Practices". Jim is a great speaker and I am looking forward to this.

The venue is the Beta-faculty, Huygensgebouw, at Heyendaalseweg 135, Nijmegen, Parkeergarage P11. Registration and pizza will occur from 18:30 hrs until 19:15 hrs when my first talk commences. The presentations will end at 21:00 hrs followed by a period for further networking. Registration is free but necessary.

Posted on: 26 February 2013 at 10:55 hrs


04 January 2013

Online Behavioural Advertising Rule Changes

The UK Code of Non-broadcast Advertising, Sales Promotion and Direct Marketing (CAP Code) will include new rules in a month's time (February 4th 2013) relating to greater transparency and choice for consumers around Online Behavioural Advertising (OBA).

Photograph of a hand-written notice taped to the pavement with the words 'Please mind the hole!!' written on it - there appears to be an uncovered inspection chamber below

The Online Behavioural Advertising Regulatory Statement, published by the Committee of Advertising Practice (CAP) in November 2012, describes how notices must be provided to web users, in or around online display advertisements, informing them that OBA is being undertaken, together with a mechanism to opt out. These are based upon the pan-European industry-wide agreed self-regulatory standards — the European Advertising Standards Alliance (EASA) Best Practice Recommendation and the IAB Europe Self-Regulation Framework.

The rules are defined in a new Appendix 3 of the CAP Code, and will be enforced by the Advertising Standards Authority. The rules will be reviewed again later in 2013.

Posted on: 04 January 2013 at 08:39 hrs


07 December 2012

Waffish Behaviour in 2012

In Scotland and northern England, a "waff" is a gust or puff of air, or a passing glimpse. It is also a verb meaning to flutter or cause to flutter. In this post I want to avoid hot air, waffle and waggish comments to highlight guidance on the deployment and use of web application firewalls (WAFs).

Crowd/queue control barriers

WAFs can be controversial. They can be a blunt instrument for adding some protection to web applications; they may not be well understood and are often not configured well; they can be expensive to acquire and require an ongoing resource commitment; they may cause problems with valid business functionality; they could lead to responsibility for application security being delegated primarily to operations; and, if not integrated with other software assurance activities, they can lead to the mistaken assumption that applications are secure. These issues need to be considered, but WAFs are a valid tool to have in your arsenal of defences.

Some more recent, and older long-standing, viewpoints and uses are described in the sources listed in alphabetical order below:

If you have WAFs, or are thinking of using them, do read all of the above and the subsequent discussions about some of those papers, as well as listening to suppliers/vendors. Then make up your own mind.

Posted on: 07 December 2012 at 08:54 hrs



Monitoring : Web Security, Usability and Design


Please read our terms of use and obtain professional advice before undertaking any actions based on the opinions, suggestions and generic guidance presented here. Your organisation's situation will be unique and all practices and controls need to be assessed with consideration of your own business context.

Terms of use http://www.clerkendweller.com/page/terms
Privacy statement http://www.clerkendweller.com/page/privacy
© 2008-2014 clerkendweller.com