Sunday, October 16, 2016

This site has moved!

Set your blog bookmarks to the new site.  Please update any article bookmarks as well, since those links have changed.

Monday, October 10, 2016

Top Security Expert: IoT Security is a Market Failure


In a recent blog post on Schneier on Security, Security Economics of the Internet of Things, security expert and cryptologist Bruce Schneier describes the economics of securing IoT devices.  The post was prompted by unprecedented DDoS attacks against investigative security journalist Brian Krebs and his web site.  Schneier describes an interesting situation in IoT security where neither the purchaser nor the seller has a business stake in security quality.  As a result, IoT security across the industry is very weak or non-existent.  This is far different from the smartphone and computer markets, where there is strong business interest, regular security patching, and devices are replaced every two to three years.  Schneier notes that weak, and sometimes non-existent, IoT security creates an "externality", a sort of invisible pollution, broadly impacting many individuals and businesses.  So while purchaser and seller don't share a business interest in security quality, other innocent parties may be harmed by their decisions, much like environmental pollution.  Schneier takes a strong stance, describing IoT security as a market failure, and argues that government involvement is the only way to correct failed markets.

Related posts
Security Sucks - Who's to Blame?
A Few Thoughts on Security as a Public Health Issue
Woodsy Owl 2016 - Don't Pollute Software!

Tuesday, October 4, 2016

Why Yahoo's Previous Security Chief Left for Facebook

There is seldom transparency around executive departures but this one is particularly interesting.

[Yahoo's Response] "Yahoo is a law abiding company, and complies with the laws of the United States," the company said in a brief statement in response to Reuters questions about the demand. Yahoo declined any further comment.

The original story, Exclusive: Yahoo secretly scanned customer emails for U.S. intelligence

Wednesday, September 28, 2016

OWASP WordPress Security Implementation Guide

An email came across the OWASP leaders list today about securing WordPress.  If you're interested in strengthening your WordPress server, there are some free and helpful tools you may not be aware exist.

OWASP WordPress Security Implementation Guide
The OWASP guide describes cross-domain security techniques and tips for strengthening security on your WordPress servers.  The guide is not version specific, so check whether there are any version-specific vulnerabilities you need to be aware of for your particular version.

WordPress Nuke
A project by Munir Njenga (OWASP Chapter Leader, Kenya) takes some of the techniques described by the OWASP WordPress security guide and applies them in a plugin you can install on your WordPress server.  The plugin has been tested with WordPress 4.6.1 and is a work in progress.

WordPress is an amazing application for managing your blog.  It packs powerful extensibility features for integrating 3rd-party tools.  There is also a lively community of developers working on these tools, and there's a plugin for almost anything you want to do.  Like many highly extensible and useful software products, WordPress is challenging to secure, which is my reason for posting.

Monday, September 19, 2016

OWASP 2016 Board Election Interviews

Following are the links to the podcast interviews for OWASP's 2016 Board of Directors election.  I'm running for the board this year, so I have indexed each of the links to start at my response, but feel free to listen to all the responses.

OWASP Podcast Interview Part 1 of 4, Developer Participation [Audio]
OWASP Podcast Interview Part 2 of 4, Vendor Neutrality [Audio]
OWASP Podcast Interview Part 3 of 4, Most Important Issues [Audio]
OWASP Podcast Interview Part 4 of 4, Members, Projects, Conferences, and Chapters [Audio]

Friday, September 16, 2016

View Into the World of Facebook Metadata

Updated on September 17, 2016

A research paper I found offers an interesting view into the world of Facebook metadata and why metadata is valuable, but there's more.  Of the two researchers, one is from FB, as expected, but the other is from Carnegie Mellon University (CMU).  This is meaningless to a casual reader, but CMU maintains a relationship with, and conducts security research for, the U.S. Government.  At times this relationship has come under fire, revealing interests in dark programs: "Why was the Black Hat talk on Tor de-anonymization mysteriously canceled?".  Of course, there is the possibility the relationship between the researchers on the FB research project is entirely coincidental.  Many security professionals participate in projects with others across industry.  CMU also shares many positive security projects with the public and industry, like their Secure Coding efforts.  Even so, if we take the circumstantial evidence at face value, the United States Government may have an interest in the Facebook posts/comments that users choose not to publish.

Monday, September 12, 2016

Presenting DeepViolet TLS/SSL at Black Hat Europe 2016

November 1-4, 2016, I am presenting DeepViolet TLS/SSL at the Black Hat security conference in London.  To learn more about the DeepViolet TLS/SSL scanning API and tools, check out the OWASP project landing page.  Or see the session description on Black Hat's web site, DeepViolet TLS/SSL Scanner.

A few months ago I presented another, unrelated security project, the OWASP Security Logging Project, at OWASP AppSec EU in Rome, Italy.  International trips are expensive.  Many thanks to the generosity of my manager and my employer, Oracle!

Speed Development & Fun with OWASP JSON Sanitizer

A time-saving tip occurred to me while working on a cloud security tools project and implementing the OWASP JSON Sanitizer.  The sanitizer does not differentiate between malformed JSON sent by attackers and malformed JSON originating from developer error.  So it's helpful in both cases, but let me explain.

The time-saving point is that, as you're developing your application, depending upon the tools you use to transform JSON it may be more or less easy to make mistakes.  Finding mistakes in your JSON is time-consuming and detail-oriented work.  JSON is a little easier to read than XML, but that's little comfort with large or complex documents.  The sanitizer saves time since it corrects errant JSON, making it well-formed.  I found this behavior useful to alert on problems during development and perhaps even post-deployment.  Consider the following code fragment,

// Simple sanity checks before we call the sanitizer
if( json == null || json.length() < 1 ) {
  throw new MyException("Missing request");
}
// OWASP JSON Sanitizer
String sanitizedJson = JsonSanitizer.sanitize(json);
if( !json.equals(sanitizedJson) ) {
  logger.error("RAW JSON, detail="+json); 
  logger.error("SANITIZED JSON, detail="+sanitizedJson);
  String msg = "Raw/Sanitized JSON not eq.  Attack or malformed JSON, see log.";
  throw new MyException(msg);
}

If there is a difference between the raw JSON and the well-formed sanitized JSON then likely either 1) your program has a bug (e.g., encoding, malformed JSON), or 2) an attacker is tampering with client JSON to exploit your parser.  Whichever case is true, you need to review the JSON to see what went wrong.  Once deployed, you can configure a log4j appender to send alerts so you can investigate offline.  I don't claim the technique is unique or innovative, but it was unexpectedly helpful so I thought I would share the idea.
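The guard above can be generalized into a small reusable check.  The following is a minimal, self-contained sketch of the compare-and-reject idea; the sanitize step is injected as a function so the sketch runs without the OWASP library (in real code you would pass the library's sanitize call instead).  The class and method names here are illustrative, not part of any OWASP API.

```java
import java.util.function.UnaryOperator;

/** Sketch of the raw-vs-sanitized JSON guard described above. */
public class JsonGuard {

    /**
     * Returns the sanitized JSON if it matches the raw input; otherwise
     * throws, signaling either a client-side bug or tampering.  In real
     * code, pass the OWASP JSON Sanitizer's sanitize method here.
     */
    public static String requireWellFormed(String json, UnaryOperator<String> sanitizer) {
        if (json == null || json.length() < 1) {
            throw new IllegalArgumentException("Missing request");
        }
        String sanitized = sanitizer.apply(json);
        if (!json.equals(sanitized)) {
            // In production: log both raw and sanitized payloads for offline review.
            throw new IllegalArgumentException("Raw/sanitized JSON differ; attack or malformed JSON");
        }
        return sanitized;
    }

    public static void main(String[] args) {
        // A toy "sanitizer" that strips a trailing comma before a closing brace.
        UnaryOperator<String> toy = s -> s.replace(",}", "}");
        System.out.println(requireWellFormed("{\"a\":1}", toy));   // well-formed: passes through
        try {
            requireWellFormed("{\"a\":1,}", toy);                  // malformed: rejected
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

The function-injection style also makes the guard easy to unit test without the sanitizer dependency on the classpath.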

Wednesday, September 7, 2016

OWASP Dependency Check 1.4.3 Released

OWASP Dependency Check 1.4.3 has been released.  Following is the announcement from the OWASP Leaders List,

OWASP Dependency Check is a great tool to include in your CI automation suite.  Use Dependency Check to alert on known-insecure libraries your developers are using and encourage moving to libraries with fewer known vulnerabilities.
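For Maven-based builds, wiring Dependency Check into CI can be a small pom.xml addition.  The fragment below is a sketch; verify the plugin version and options against the Dependency Check documentation for your release.

```xml
<!-- Sketch: run Dependency Check during the build so known CVEs surface in CI -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>1.4.3</version>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```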

Monday, July 18, 2016

DeepViolet TLS/SSL Java DAST Tool Added as OWASP Project

On July 13, 2016 the DeepViolet TLS/SSL DAST tool became an OWASP incubator project.  I started this project some time back for my own purposes.  I always intended to share this code publicly, but I never seriously considered it would be useful to anyone, mostly since such great tools as OpenSSL and Qualys already exist.  It became apparent after being contacted by interested developers and operational teams that there's still room to contribute a new tool in this space.  I petitioned OWASP to add DeepViolet as an OWASP project to increase visibility and to build a team of like-minded developers willing to invest in DeepViolet and build a tool we can all use.

So what can you do with DeepViolet?
A picture is worth a thousand words so here is a sample of some of the scanning output.
Photo 2: DeepViolet Desktop Application View

DeepViolet can also be run from the command line and included in your shell scripts.  A sample of the output looks like the following.

DeepViolet can also be included in your own projects as an API.  For more information about DeepViolet refer to the following information.

OWASP DeepViolet TLS/SSL Scanner Code Project, main OWASP project landing page.
DeepViolet GitHub Project Page, main landing page for GitHub project code/documentation.
DOWNLOAD, current release binaries.

Monday, July 4, 2016

OWASP Security Logging Project Presentation - Slide Deck

On June 30, 2016 I gave a presentation, How to Use OWASP Security Logging, at AppSecEU 2016 in Rome, Italy.  This post follows up with the presentation slides.  For background about the project see my previous post, Presenting at OWASP AppSec EU Conference in Rome.

Thursday, June 23, 2016

Presenting at OWASP AppSec EU Conference in Rome

Updated on July 4, 2016

For a copy of the slide deck for this presentation see my follow-up post, OWASP Security Logging Project Presentation - Slide Deck.

Thursday, June 30, 2016 at 4:15pm I am presenting a Lightning Training Session, How to Use OWASP Security Logging, with August Detlefsen and Sytze van Koningsveld.  The training session will be a mixed format of presentation and hands-on lab exercises.

Attendees will learn about the OWASP Security Logging Project: background on why we need security logging, its benefits, how to include it in new projects, upgrading your legacy projects, and much more.  In the session we cover each feature and answer audience questions.  Bring your laptop and participate in our exercises.  Learn first-hand how to apply security logging to your projects.

So why would you be interested in our logging project?  A brief rundown of the benefits,

Diagnostics/Forensics, for problem determination it is often useful to have a history of system state recorded in logs that you can refer to when there are problems.  Security logging provides features that log command line arguments, system environment variables, and Java system properties on startup.  Security logging also provides an interval logging feature to log key system and user-specified metrics every 15 seconds.  SIEM tools can be integrated to alert on memory problems, etc.

Security Focus, door open/closed, user logged in/out, resource allocation, information classification of log messages, a desirable feature for government agencies or government contractors

Compliance, sign log messages, log messages remotely, discourage tampering

Automation Across Several Use-Cases,  the project provides automation benefits for standalone and desktop applications as well as further up the application stack (e.g., Servlets/J2EE).  For example, in the application layer the project provides facilities to pull the user id from the HttpSession and insert it into the log4j/logback Mapped Diagnostic Context (MDC) so that users can easily correlate every log message with the user currently logged into the system.

Support for Popular Platforms,  are you using Java logging, log4j, log4j 2, or logback?  If so, you're ready to go since security logging is written to the SLF4J logging interface.

Large Base of Developer Knowledge,  security logging is compatible with popular loggers so you can get running quickly.

Legacy Support, security logging includes support to capture streams from your old console logging applications (e.g., System.out/System.err).  Alternatively, you may have old commercial code that logs to consoles where you don't have the source code.  In these use cases there are benefits to intercepting these streams and redirecting them to security logging.  You will not realize the full benefits of native logging (e.g., logger inheritance); however, you still receive some ancillary benefits like remote logging, the ability to mark messages with an information classification, etc.
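The console-capture idea can be sketched in a few lines of plain Java: swap System.out for a PrintStream that buffers writes, then forward each line to a logger.  This is only an illustration of the underlying technique; the OWASP project's actual interceptor classes differ.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.List;

/** Sketch: capture legacy System.out writes so they can be forwarded to a logger. */
public class ConsoleCapture {

    /** Runs the task with System.out redirected, returning the captured lines. */
    public static List<String> captureStdout(Runnable task) {
        PrintStream original = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try {
            System.setOut(new PrintStream(buffer, true));
            task.run();
        } finally {
            System.setOut(original);  // always restore the real console
        }
        List<String> lines = new ArrayList<>();
        for (String line : buffer.toString().split("\\R")) {
            if (!line.isEmpty()) lines.add(line);  // forward each line to your logger here
        }
        return lines;
    }

    public static void main(String[] args) {
        List<String> captured = captureStdout(() -> System.out.println("legacy console message"));
        System.out.println("captured: " + captured);
    }
}
```

A real interceptor would install the redirected stream once at startup rather than per task, but the System.setOut mechanism is the same.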

There is a lot to cover with the platform.  Hope to see you in Rome at our session; seats are filling up fast, so register quickly.  OWASP usually provides the session content after the conference, so if you can't attend you still have an opportunity to learn more about the platform.

Additional Resources
Wiki, OWASP Security Logging Project
Lightning Training Presentation, How to Use Security Logging Presentation
GitHub Project Site, OWASP Security Logging code

Tuesday, June 14, 2016

Blue Coat Intermediate CA Certificate Has Not Been Revoked

In a recent Internet security kerfuffle, Symantec issued the surveillance company Blue Coat Systems a powerful digital certificate that allows them to masquerade as any secure business or financial institution by impersonating their web server.  See my original post for background, Blue Coat has Intermediate CA signed by Symantec.

In a statement, Symantec notes that companies often test with their own Intermediate CA.  While it's true companies test their PKI processes, it's very uncommon for Intermediate CA certificates in a test environment to anchor to trusted roots in popular web browsers.  Any Intermediate CA certificate anchoring to trusted roots is, by definition, a live production certificate.
Symantec goes on to note that certificates used in testing are "discarded" once tests are completed.  Unfortunately, this type of public communication is difficult to understand from a technical standpoint.  The standard practice to assure the public a certificate cannot be used is to revoke it.  In the PKI system, a revoked certificate produces scary warnings when users try to browse web sites presenting it.  The assurance we desire is that the certificate is revoked; whether Blue Coat has the private key or not is immaterial.

To better understand the communication from Symantec, I checked the Blue Coat CA revocation status.  The result is that the Blue Coat CA certificate has not been revoked.  While there is no evidence of inappropriate use, nothing about this incident, in the way it's been explained or handled, is considered industry best practice or even normal practice.  This is not the first time Symantec's certificate management processes have been called into question by security researchers, The Case of Symantec's Mysterious Digital Certificates.

You can test the Blue Coat CA certificate revocation status yourself with the following procedure.

Step 1 - Download Blue Coat CA Certificate
Download the Bluecoat CA Certificate to your computer.

Step 2 - Extract CRL host from Bluecoat Certificate
I'm using a work-in-progress tool I wrote, DeepViolet, to read the certificate, but openssl is a well-established alternative available on many operating systems.  If you're using openssl you can view the certificate with the following, openssl x509 -in bluecoat-cert.crt -text -noout

java -jar dvCMD.jar -rc ../Downloads/bluecoat-cert.crt
Starting headless via dvCMD
Trusted State=>>>UNKNOWN<<<
Validity Check=VALID, certificate valid between Wed Sep 23 17:00:00 PDT 2015 and Tue Sep 23 16:59:59 PDT 2025
SubjectDN=CN=Blue Coat Public Services Intermediate CA, OU=Symantec Trust Network, O="Blue Coat Systems, Inc.", C=US
IssuerDN=CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU="(c) 2006 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
Serial Number=108181804054094574072020273520983757507
Signature Algorithm=SHA256withRSA
Signature Algorithm OID=1.2.840.113549.1.1.11
Certificate Version =3
Non-critical OIDs
CertificatePolicies=[In the event that the BlueCoat CPS and Symantec CPS conflict, the Symantec CPS governs.]
ExtendedKeyUsages=[serverauth clientauth]
SubjectAlternativeName=[[[, SymantecPKI-2-214]]]
Critical OIDs
KeyUsage=[nonrepudiation keyencipherment]

Processing complete, execution(ms)=784

Step 3 - Download CRL 
Download the certificate revocation list from the server specified in the certificate.

wget -O bluecoat-symcb-crl.der

Step 4 - Display CRL
Now that we have the certificate revocation list, we can view the list of revoked certificates.  Apparently there are no revoked certificates.

openssl crl -inform DER -text -in bluecoat-symcb-crl.der
Certificate Revocation List (CRL):
        Version 1 (0x0)
        Signature Algorithm: sha1WithRSAEncryption
        Issuer: /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
        Last Update: Mar 22 00:00:00 2016 GMT
        Next Update: Jun 30 23:59:59 2016 GMT
No Revoked Certificates.
    Signature Algorithm: sha1WithRSAEncryption
-----BEGIN X509 CRL-----

-----END X509 CRL-----

Thursday, May 26, 2016

BlueCoat has Intermediate CA signed by Symantec

Updated June 12, 2016

A digital certificate was created by Symantec for Blue Coat Systems Inc.  The digital certificate is a special type of certificate that allows Blue Coat to operate as a trusted Certificate Authority(CA).  The certificate allows Blue Coat to create new digital certificates for use on highly trusted web sites like those used in banking and health care.

Most people and businesses operating servers on the Internet make every effort to provide the public with the safest and most secure online experience.  But the Internet is a big place and not everyone plays by the rules.  Providing a trusted Internet environment is essential for commerce and collaboration.  The system that manages Internet trust is Public Key Infrastructure (PKI).  PKI is the security technology and processes that web browsers and web servers use for all highly trusted activities like online banking and health care.  Certificate Authorities (CA) play a special role in PKI as the gatekeepers of secure servers on the Internet.  CA duties include managing applications for secure web servers.  To fulfill this special and important role, CA's must submit to stringent audits of their business practices and operations.  During normal day-to-day operations, CA's must preserve public trust in online security by denying criminals the ability to masquerade as legitimate businesses or trusted partners.  Most often everything goes as planned, but what about the case when CA's don't follow the rules?  Abuses may include issuing certificates without the knowledge or consent of rightful domain owners, servicing unlawful or warrantless government requests, and much more.

Why is this incident important to me?
In May 2016 security researcher Filippo Valsorda discovered an Intermediate CA X.509 digital certificate issued to Blue Coat Systems by Symantec.  This is a concern for two reasons: 1) Blue Coat Systems manufactures hardware designed for surveillance, and 2) the Intermediate CA certificate facilitates the issuance of highly trusted certificates for any Internet domain name.  For example, a Blue Coat device armed with the new CA certificate can surveil HTTPS web sites in a way that's difficult for web browser users to detect.

Why is the Blue Coat Systems CA a problem?
Trust is essential to the continued operation of the Internet.  Without trust, the full potential of the Internet will never be realized.  Few would want to purchase products, view medical laboratory results, exchange ideas with business partners, or email friends and family if our information could be surveilled, intercepted, and manipulated at any point without our full knowledge and consent.  The key displayed in your web browser during a secure HTTPS connection is an icon of trust.  If it's visible, we must have confidence the site we are communicating with is authentic and our communications confidential.

What do Blue Coat and Symantec have to say? 
Symantec has said it determined the CA certificate was issued to Blue Coat appropriately and that Blue Coat never had access to it.  This statement is designed to assuage public concern, since it would preclude impropriety on Blue Coat's behalf.  Unfortunately there is no easy way for the public to verify this statement.
Issuing a CA certificate to a surveillance company is by no means normal, and concern from the security research community and anyone using a web browser is warranted.  Trust and confidence when issuing CA's is the single most important duty entrusted to Symantec in its responsibility as an issuing authority.

What is the appropriate course of action for you?
It depends.  If you trust that Symantec and Blue Coat are operating in your best interest, then do nothing.  If, on the other hand, you consider Blue Coat's CA a potential vector for abuse, then you can untrust the Blue Coat CA certificate.

To mark the BlueCoat CA certificate untrusted
1) Download BC CA Cert
2) Mark untrusted, OSX users | Windows users
* Mobile users: on iPhone, I don't believe Apple exposes any trust management features to the public; on Android, I'm unsure.

Original security researcher comments

More information
The Register, Blue Coat, Skype and QQ named despots' best friends
Blue Coat Systems, Blue Coat Intermediate CA
Symantec,  Symantec Protocol Keeps Private Keys In Its Control

Thursday, May 19, 2016

Hacking 101 by Phineas Fisher

Updated May 25 2016
I located another copy of the video on the Internet,

Updated May 22, 2016
I noticed YouTube removed Phineas Fisher's video.  The reason listed: "This video has been removed for violating YouTube's policy on spam, deceptive practices, and scams".  I watched the video.  There was no spam, deceptive practices, or scams.  The material was somewhat embarrassing for the Catalan Police Union.  Even so, there's no short supply of inflammatory and embarrassing videos on YouTube, especially ones involving government officials.  It's difficult to understand why this particular video received extraordinary attention.

An instructional video by Phineas Fisher demonstrates his hack of the Catalan Police Union in 39 minutes.  Anything that could go wrong for the police did go wrong, but here's the short list.

1) Police using WordPress.  WordPress is amazing blog software, but it has a long history of security problems.  WordPress provides a very rich extensibility framework of plugins written by almost anyone.  These plugins add many desirable features to WordPress, but there is little to no quality control over them, and it's a vulnerability Disneyland for bad guys.  WordPress is great for running your personal blog but probably not the best choice if you're a big target like a government agency (or security professional).


2) Application's DB Account Running with MySQL Administrative Privileges.  Best practice is for the DB account used by the application to run with the lowest privileges possible while still meeting the needs of the application.  In this case, the application designers were unaware or lazy and used an account with administrative privileges.

3) Twitter Password for Police Same as WordPress Account.  Once the attacker had the WordPress password, he was able to sign into Twitter and deface the police department's Twitter account.  Best practice is not to reuse the same password across different web applications.  If you are going to bend this rule, then at least don't use your shared password on sites you think could be hacked or sites that place less emphasis on security.  For example, don't use the same password you use with Facebook or Google on smaller, less-known sites that may invest less in security.  At least you're cutting your risk with this approach.

Wednesday, May 18, 2016

Open Source DeepViolet SSL/TLS Scanning Tool Updated

The DeepViolet (DV) open source TLS/SSL DAST tool has been updated to Beta 4.  The major improvement in Beta 4 is the addition of an API so Java developers can use DV features in their own projects.

Following is a summary of improvements in Beta 4.

  • Added API support for those who want to use DeepViolet features in their own Java projects. See package com.mps.deepviolet.api
  • Added samples package with sample code to demonstrate new API
  • Refactored existing code for the command line support and UI to use the new API.
  • 2 new command line options added for debugging, -d and -d2. -d turns on Java SSL/TLS debugging; -d2 assigns DV debug logging priority.
  • Generated JavaDocs for Public APIs, see
  • javadoc.xml added to generate JavaDocs
  • Support for dock icon on OSX for the UI

To learn more about DeepViolet refer to the project's GitHub page or click DOWNLOAD to try DeepViolet now.

Monday, May 2, 2016

2016 Stanford University Security Forum

Throughout the week of April 11th, 2016, Stanford held its annual affiliates Computer Forum on campus.  Participation in the forum is available to affiliate members.  If you're interested in becoming an affiliate, send a note to me; see the About page.  The Stanford security forum is a great place to unplug from day-to-day business and consider broader security challenges.  The campus is beautiful and the projects are interesting.  Attending the forum is always uplifting; I usually meet leaders from industry I know and university staff, and I always learn something new from their research.

The forum is a week long, but attendees can sign up for individual days depending upon their interests.  I attended 2 days of the week-long forum.  Monday was dedicated to security; Thursday was dedicated to IoT.  Research projects and themes change from year to year.  This year cryptography and IoT were the broad themes.  Full media from the week-long forum trails the post.

A Few Thoughts or Impressions
Following are some of the more important points I learned, or points that captured my interest, in no particular order of importance.

Why are quantum computers fast?
Traditional computers process information in bits.  A bit is either "on" or "off", a 1 or a 0 respectively, but quantum computers also provide an amplitude property associated with each quantum bit.  Remember Schrödinger's Cat?  The cat was in a superposition of states where the cat is both alive and dead.  Amplitude is the measurement of the superposition, related to the probability the cat is in one state or the other.  A point of some utility is that amplitude is not a simple percentage but instead a complex number.  The value combined with the amplitude of the bit forms a quantum computational unit known as the qubit.  In a traditional computer, increasing the number of bits increases the computer's word size and address space, which increases processing power polynomially.  Increasing the number of qubits in a quantum computer increases processing power exponentially.  Unlike a traditional computer, doubling the size of a quantum computer more than doubles its computational power.  The increase in computational power is due to two major factors, 1) the unique superposition properties of the qubit, 2) higher-dimensional algorithms applicable to specific problem spaces.  Quantum computers provide a different operational computing model than a traditional computer.  Rather than a serialized approach to computing using logic gates, lasers and radio waves interfere with each other and operate across many qubits simultaneously.  In some qubits interference is constructive and in others destructive.  The design of the quantum computer and its algorithms seeks to reinforce constructive interference patterns that produce the desired results.  I realize this answer is not satisfactory for everyone.  Take a look at the presentation materials in the links at the end of the post.  Also take a look at The Limits of Quantum article.
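The "doubling more than doubles" point is just exponential growth of the state space: fully describing n qubits takes 2^n complex amplitudes, where n classical bits simply hold one of 2^n values.  A small arithmetic sketch of that scaling claim:

```java
/** Sketch: classical vs. quantum state-space growth. */
public class StateSpace {

    /** Number of complex amplitudes needed to describe n qubits: 2^n. */
    public static long amplitudes(int qubits) {
        return 1L << qubits;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 10, 20, 40}) {
            // n classical bits store one of 2^n values; n qubits need all 2^n amplitudes.
            System.out.printf("n=%d qubits -> %,d amplitudes%n", n, amplitudes(n));
        }
        // Doubling 20 qubits to 40 squares the state space rather than doubling it.
        System.out.println(amplitudes(40) == amplitudes(20) * amplitudes(20));
    }
}
```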

Quantum computers not likely to replace traditional computer
Quantum computers are fast at solving specific problems where an algorithm exists.  Quantum computers are not necessarily fast at solving all problems.  It's unlikely a quantum computer will replace your desktop; however, if a quantum computer could be made small enough it could make a nice addition to your desktop for specialized functions (e.g., 3D graphics).

Implications for web browser security
A quantum algorithm exists for factoring large numbers, Shor's Algorithm.  Web browser security is predicated on the fact that the product of two large prime numbers is difficult to factor.  A quantum computer running Shor's Algorithm can factor such numbers quickly.  However, the state of the art in quantum computers today is about 9 qubits.  According to Professor Dan Boneh, we don't need to be concerned about quantum computers cracking browser security until quantum computers reach around 100 qubits.
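To make the factoring point concrete, here is a toy sketch: an RSA-style key hides two primes inside their product, and classical trial division must search up to the square root of the modulus.  The primes below are tiny illustrative choices; real keys use 2048-bit moduli far beyond this approach, which is exactly what Shor's Algorithm would change.

```java
/** Sketch: classical trial-division factoring of a small RSA-style semiprime. */
public class ToyFactor {

    /** Returns the smallest prime factor of n (n > 1) by trial division. */
    public static long smallestFactor(long n) {
        for (long p = 2; p * p <= n; p++) {
            if (n % p == 0) return p;
        }
        return n;  // n itself is prime
    }

    public static void main(String[] args) {
        long p = 104729, q = 1299709;   // two small primes (toy key material)
        long n = p * q;                 // the public modulus hides p and q
        long found = smallestFactor(n);
        System.out.println(n + " = " + found + " x " + (n / found));
    }
}
```

Trial division here takes a few hundred thousand steps; at 2048-bit sizes the same search exceeds the lifetime of the universe on classical hardware.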

Browser security in a post-quantum computing world
Professor Boneh elaborated: post-quantum encryption algorithms remain an area of interest.  Algorithms that are useful in a post-quantum world favor smaller primes within higher-dimensional number spaces (>1024).  A research paper, Post-Quantum Key Exchange - A New Hope, provides details.

TLS-RAR for auditing/monitoring SSL/TLS connections
A new protocol has been developed to monitor SSL/TLS.  TLS-RAR does not require terminating the SSL/TLS connection and establishing a new connection to the end-point.  Instead, TLS-RAR works by dividing TLS connections into multiple epochs.  As each new epoch is established between client and server, a new TLS session key is negotiated.  Meanwhile, the TLS session keys for old epochs are provided to the observer, which may be an auditor or monitoring tool.  In this way the observer has access to old TLS epoch information but cannot view or alter information from the current epoch.  Data integrity and confidentiality between client and server are maintained.  Some of the advantages: no changes to the client are required (no new roots to add), and current TLS/SSL libraries are supported.  This means TLS-RAR is compatible with a host of IoT technologies and components already deployed.
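TLS-RAR's real key schedule is defined in the paper, but the epoch property it relies on can be illustrated with a toy sketch: derive each epoch's key independently from a master secret, so handing an observer the key for a completed epoch reveals nothing about the current one.  Everything below (class name, key labels) is illustrative, not the actual protocol.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

/** Toy sketch of per-epoch session keys: releasing an old epoch's key
 *  reveals nothing about the current epoch.  Not the real TLS-RAR schedule. */
public class EpochKeys {

    /** Derives the key for one epoch from the master secret via HMAC-SHA256. */
    public static byte[] epochKey(byte[] master, int epoch) throws GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(master, "HmacSHA256"));
        return mac.doFinal(("epoch-" + epoch).getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws GeneralSecurityException {
        byte[] master = "client-server-master-secret".getBytes(StandardCharsets.UTF_8);
        byte[] oldKey = epochKey(master, 1);   // handed to the auditor after epoch 1 ends
        byte[] current = epochKey(master, 2);  // still known only to client and server
        // The auditor can decrypt epoch-1 traffic but cannot derive the epoch-2 key.
        System.out.println("keys differ: " + !Arrays.equals(oldKey, current));
    }
}
```

Because each epoch key is an independent HMAC output, possession of one key gives no computational path to any other without the master secret.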

Session Media from the Forum
The following links provide access to session materials from throughout the forum.

Wednesday, April 27, 2016

DeepViolet SSL/TLS Scanning Tool Updated

Updated on April 29, 2016

DeepViolet has been updated to Beta 2.  A number of bugs have been fixed and new features added.  The tool can be run from the command line or as a desktop GUI application.  Refer to the GitHub DeepViolet documentation for more detail.  Following is an overview of the command line options for quick reference.

usage: java -jar dvCMD.jar -serverurl [-wc | -rc ]
            [-h -s{t|h|r|c|i|s|n}]
            Ex: dvCMD.jar -serverurl -sections ts
            Where sections are the following,
            t=header section, h=host section, r=http response section,
            c=connection characteristics section, i=ciphersuite section,
            s=server certificate section, n=certificate chain section
   -h,--help Optional, print dvCMD help options.
   -rc,--readcertificate Optional, read PEM encoded certificate
            from disk. Ex: -rc ~/certs/mycert.pem
   -s,--sections Optional, unspecified prints all section
            or specify sections. [t|h|r|c|i|s|n]
   -u,--serverurl Required for all options except
            -readcertificate, HTTPS server URL to
   -wc,--writecertificate Optional, write PEM encoded certificate to
            disk. Ex: -wc ~/certs/mycert.pem

Or alternatively, use the desktop application.

Photo 1: DeepViolet Desktop Application

Friday, April 22, 2016

Woodsy Owl 2016 - Don't Pollute Software!

It's been 6 years since David Rice's presentation and 4 years since my related blog post.  I can safely say it had some impact on me.  I'm not sure whether pollution or health care is the better metaphor for security, but clearly national action is needed.  It's interesting to me that society could muster the interest and investment to improve national sentiment around pollution.  Software security is no less of a challenge.  I'm confident such an effort will develop around software security someday.  There's no way society can continue on the present course, shrugging off security incidents of ever-increasing size and scope.  Someday the level of pain, suffering, and public outcry will force action.

Tuesday, April 19, 2016

Weaknesses with Short-URLs

Recent research was presented[1] raising security and privacy concerns around URL shortening services.  The services are used to shorten lengthy URLs into smaller URLs more suitable for online use.  Smaller URLs also provide an ancillary benefit since they are easier to remember.  My first impression of the recent research[1] on URL shorteners was that it was specious, since URL shortening was never intended or designed as a security and privacy control in the first place.  Reading the research softened my initial opinion.  The seeming randomness of these short URLs gives the public unfounded confidence in their utility for security: specifically, the false idea that others will not discover the link since it appears difficult to guess.  Unfortunately, the part of the URI providing the identity of the long URL is as few as 6 characters for some shortening services, far too small a space to be cryptographically secure, and easily brute-forceable by attackers, as demonstrated by the researchers.
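Some back-of-the-envelope math shows why 6 characters is too small, assuming a 62-symbol [a-zA-Z0-9] alphabet and an arbitrarily assumed guess rate:

```python
# Size of a 6-character token space over [a-zA-Z0-9] (62 symbols).
keyspace = 62 ** 6
print(keyspace)                      # 56800235584 possible tokens

# At an assumed 1,000 guesses/second from a single modest scanner:
guesses_per_second = 1_000
seconds = keyspace / guesses_per_second
years = seconds / (365 * 24 * 3600)
print(round(years, 1))               # about 1.8 years, trivially parallelizable
```

Roughly 57 billion tokens sounds like a lot, but it is nothing like a cryptographic keyspace, and scanning parallelizes across machines and services.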

The research paper was not the first crack in the short URL armor.  The following presents some concerns I gathered across different resources from other researchers.  I also share some personal thoughts about short URL weaknesses that I have not noticed elsewhere.  I don't stake any claim to these; I'm simply passing them along to raise awareness.  I'm betting we have not seen the last of the security and privacy concerns with short URLs.

1) Short URLs not secure
As researchers mention[1], these links are not secure and are easily brute forced.  This may or may not be a concern for you depending on how you use them.

2) Short URLs target host unknown until clicked
Phishing is a problem for everyone, and short URLs exacerbate an already bad email phishing problem.  There are services where email users can unwind these URLs, but most people will never do this.  People are trusting, and verification takes extra work.  Clicking a shortened URL is like hitchhiking in a stranger's car: you don't know where it's taking you.

3) Obfuscated redirects
Brian Krebs makes an interesting point[3]: attackers can leverage an open redirect on a government host and create a short branded URL.  The result is an authentic URL that looks like it navigates to a government web site but instead navigates to the attacker's malware site.

This URL

Becomes this branded URL (notice the .gov domain, ouch!)

The combination of an open redirect and short URL branding creates misplaced trust, a false sense of security.  Users think clicking will take them to a government site when in fact it takes them to another site entirely.  The moral of the tale: if you have any open redirects on your web site you're in trouble, but if you also use branded URL shorteners you're setting the public up for malware and phishing attacks.
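A minimal sketch of the corresponding defense, assuming hypothetical host names: redirect only to an explicit allowlist, so attacker-controlled destinations are refused.

```python
# Close the open redirect: only forward to hosts on an explicit allowlist.
# ALLOWED_HOSTS and the URLs below are hypothetical.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.example.gov", "forms.example.gov"}

def safe_redirect(url: str) -> bool:
    """Accept only absolute http(s) URLs whose host is allowlisted."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS

assert safe_redirect("https://forms.example.gov/apply")
assert not safe_redirect("https://evil.example.com/malware")
assert not safe_redirect("//evil.example.com/")      # protocol-relative trick
```

Checking the parsed hostname (rather than substring-matching the raw string) also rejects tricks like protocol-relative URLs and `https://example.gov.evil.com`.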

4) Obfuscated payloads
A spin on Krebs' idea I considered: attackers can save any arbitrary payload in a long URL, hidden from prying eyes.  For example, on some services it's possible to create arbitrary URLs with invalid hosts and parameters so long as those URLs are syntactically correct; some shortening services are not checking to ensure the host is valid.  Even if the host is valid, URI parameters may be crafted that legitimate hosts ignore entirely, like a=b,b=c.  Some servers, like Blogger, ignore superfluous parameters such as a=b,b=c in the request if you pass them.  Attackers can create any URL they want.  In a quick test I used a URL ending in 10,000 zeros, for a 10KB URL.

I created a bogus URL with a 10KB URI consisting of a slash (/) followed by 10,000 zeros, and it was accepted.  Attackers can store payloads in these bogus URLs for a variety of purposes.  Outside of validating the syntax and the host, shortening services have no way of knowing whether these URIs are valid and, in their defense, there's probably not a good way for them to validate.  Therefore, they must store the entire long URL.  This means an attacker can use URL shortening services to hide small chunks of arbitrary data for nefarious purposes like command and control for bot networks, torrent information, etc.  URL shortening sites undoubtedly provide security intrusion and content controls.  There are likely limits on the size or number of URLs per second they will accept; I'm not sure what they are, and they likely vary between shortening services.
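The test described above is easy to reproduce in a few lines, with example.com standing in for any syntactically valid host:

```python
# A syntactically valid URL whose path is 10,000 zeros. A shortener that
# validates only syntax (and perhaps the host) must store it verbatim.
from urllib.parse import urlparse

payload = "0" * 10_000                      # arbitrary attacker-chosen data
bogus = "https://example.com/" + payload    # example.com stands in for any host

parsed = urlparse(bogus)                    # parses cleanly; syntax is fine
assert parsed.path == "/" + payload
assert len(bogus) > 10_000
```

Nothing about the URL grammar bounds the path, so the only practical limits are whatever size caps the shortening service chooses to enforce.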

5) Multiple indirection
Some of the URL shortening services will not accept their own URLs as a long URL, but at least a few of them will accept shortened URLs from other services.  Therefore it's possible to create multiple levels of indirection: short URLs referring to other short URLs.  How many levels can be created?  I'm not sure.  It seems browsers must enforce some practical redirect limit, but I have no idea what it is.  I'm not sure this serves a practical purpose yet, but at the very least it complicates organizational IT forensics.

6) Infinite loops
I wondered whether I could create two or more short URLs referring to each other.  Getting this to work requires an understanding of the shortening algorithm, such that the attacker can determine the shortened URI before it's created, or perhaps a shortening service that allows changing the long URL after the short URL has been created.  Either would allow an attacker to create short URLs that directly or indirectly refer to each other.  I didn't spend much time looking at this.  I tried to find some code online to see if there were any standard algorithms, thinking everyone might be leveraging the same open source project so I could determine the algorithm easily.  Nothing was obvious; I was not successful.  Perhaps someone else may want to take this up.  I'm not sure whether browsers are smart enough to detect these types of infinite redirects.  If not, it seems plausible they could be used to hang or crash the browser.  Even if possible, I'm not sure this has any practical value for attackers anyway.

7) XSS in URLs
I tried to see if I could get JavaScript inside a long URL and then shorten it to bypass browser security controls.  No success.  I tried the javascript URI scheme.  Some URL shorteners allowed it, but at least Chrome and Safari were smart enough to handle the redirect as an html scheme regardless of the scheme I provided.  I also tried the data scheme with no positive result: data URLs work when pasted directly into the browser URL bar, but not as a redirect, again handled like an html scheme regardless of the specified scheme.  Browsers are a battle-hardened environment, which is good news for us.
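The same kind of check can be applied on the submission side; a shortener (or any link handler) can reject non-http schemes up front.  A hypothetical helper sketching the idea:

```python
# Reject long URLs whose scheme isn't plain http/https at submission time.
# (Sketch only; real services may filter far more than this.)
from urllib.parse import urlparse

def acceptable_scheme(long_url: str) -> bool:
    return urlparse(long_url).scheme.lower() in ("http", "https")

assert acceptable_scheme("https://example.com/page")
assert not acceptable_scheme("javascript:alert(1)")
assert not acceptable_scheme("data:text/html,<script>alert(1)</script>")
```

An allowlist of schemes is the robust form of this check; denylisting only `javascript:` invites bypasses via the many other executable or data-bearing schemes.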

8) Shortener Service Unavailability
If a shortening service goes away, temporarily or permanently, it impacts every service where its shortened links are embedded.  What happens to Twitter if its shortener goes away?  Not good.  DDOSing the shortener is essentially the same as DDOSing Twitter, since the better part of Twitter's content would be unreachable for users if the shortener cannot respond.  Bit.do maintains a big list of shortening services[2], and also tracks shortening services no longer available; there are many more of them than I was aware.  If shortening is part of your business strategy, or your users are using it, you may want to consider all your available options and weigh the risks: reliable services, hosting your own, etc.

Keep in mind my tests were not comprehensive or exhaustive.  I didn't want to do anything that could be considered offensive.  So if I noted a test was successful, it may not be successful across all services; conversely, if a test was unsuccessful, it may not be unsuccessful everywhere.  An important consideration: while there are some problems with URL shorteners, there's no good immediate option for avoiding them.  If you're going to participate in social media, you're going to be using short URLs, like it or not, until improvements are made.

[1] Gone in Six Characters: Short URLs Considered Harmful for Cloud Services
[2] Bit.do list of URL Shorteners
[3] Spammers Abusing Trust in US .Gov Domains

* Landmines image from World Nomads

Friday, April 8, 2016

Funniest Security/Privacy Tweet of 2016

Soghoian is referring to a piece of tape FBI Director Comey places over his laptop camera.  The subtle message for the public is that electronic privacy is for the privileged elite.


Photo: click to enlarge

I see a lot of companies without top security leadership representation: CISOs.  Check out a few company leadership pages sometime.  The point is that with no application security expert in the board room, don't expect security concerns to be raised until your next public security incident.  Keep in mind the job of the CISO is not to be the scapegoat for your next public security incident; we are way past that now.  It's to identify and reduce the business risks and injury posed by technology products/services to acceptable levels.  Two points: 1) you need a CISO, and 2) hire a knowledgeable CISO if you like your executive job or board position.

A couple of cases that could have been avoided or gone much better with a knowledgeable CISO...  The Matter of LabMD, Inc.  Target CEO Fired - Can You Be Fired If Your Company Is Hacked?

*Photo from Transformers film, 2007

Thursday, April 7, 2016

Vulgar Furry Ramblings

ARS: Nation-wide radio station hack airs hours of vulgar “furry sex” ramblings

The article goes on to conclude, "...advisory suggests that users should change passwords to the Web interface.."  No zero days or exotic hacks, only attackers doing their homework.  There's a strong possibility KIFT could have avoided the entire mess had they changed the default factory credentials.

Application Security and Privacy One Year Ago

Some security gems from around April 2015.

Last Week Tonight with John Oliver: Government Surveillance (HBO)

Application Security Meme

Tuesday, April 5, 2016

Fortune Top-100 CISO's Not Well Equipped to Defend Software

Updated on April 16, 2016

To understand why online systems are plagued with seemingly endless security incidents requires a closer look into today's security landscape.  Let's look first at the vulnerable systems criminals exploit.  Top security company WhiteHat says it best on their home page.
Photo 1: Excerpt from home page (click to enlarge)
According to WhiteHat, web applications are the greatest risk area.  Next, WhiteHat says, "...most security budgets are spent on securing and monitoring the perimeter and endpoints".  According to the FBI 2014 Internet Crime Report, "...IC3 received 269,422 complaints with an adjusted dollar loss of $800,492,073...".  Keep in mind these are US losses, not global.
"...IC3 received 269,422 complaints with an adjusted dollar loss of $800,492,073...", FBI 2014 Internet Crime Report
Aside from the claims and statistics, it does not take a security expert to understand the global force behind the online movement.  Virtually every product and service is moving online, and it stands to reason criminals and crime are following the money.
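A quick back-of-the-envelope on those FBI figures puts the scale in perspective, average adjusted loss per complaint:

```python
# Average adjusted loss per IC3 complaint, from the 2014 figures above.
complaints = 269_422
losses = 800_492_073                 # USD, US-only

avg = losses / complaints
print(round(avg))                    # 2971, roughly $3,000 per complaint
```

Nearly $3,000 of adjusted loss per reported complaint, before counting incidents that were never reported to IC3 at all.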

Let's change gears and look into the background of today's top security executive, the Chief Information Security Officer (CISO).  The following is Digital Guardian's infographic for the Fortune 100's top CISOs.

Photo 2: Infographic from the Digital Guardian web site
The infographic tells us CISOs are predominantly male, well educated, and hold various security and audit certifications.  In short, nothing particularly remarkable outside of our expectations, but take a look at the following: 59% of CISOs have an IT work background, with only 13% having programming/engineering experience.
Fortune 100 CISOs are not well equipped with the skills necessary to defend today's vulnerable web applications
Makes sense: for years IT leaders have been successfully defending perimeters with firewalls.  In all fairness, firewalls will always be valuable, but they have not proven as effective at defending online applications as they have at defending IT infrastructure.  Indications are Fortune 100 CISOs are not well equipped with the skills necessary to defend today's vulnerable web applications.  Let's look at some of the reasons why.

Writing software code, software architecture, debugging, and understanding the battery of tools is an entire domain of expertise.  Can programming be learned like any other challenge?  Of course, but let's give programmers some credit: application development is an entire domain of knowledge and takes years to master.  Once that domain is mastered, learning to think like an attacker, breaking systems, secure coding techniques, secure coding libraries, and dynamic and static analysis security tools is, in all fairness, an entirely new domain of expertise to master, and one not taught in most universities.  A top defender and designer of secure software has a unique skill set.  This is why those who break into systems (e.g., pentesters) or secure traditional IT infrastructure don't necessarily make the best application defenders.

Attacks occur where you least expect them, which is often frustrating to newcomers in the application security profession

To give some idea of the learning challenges: learning basic programming principles, like writing a "Hello World" program in Java, takes about 10 minutes.  Learning object-oriented design principles takes some months.  Learning the various Apache and open source packages you need to be competitive in a business environment can take years.  Understanding how to defend all that technology takes years of working through incidents, developing the security mindset, and understanding the tools and techniques.  A strong technical leader requires mastery of two domains: software development and security.  If you wanted a leader for security engineering this is all you would need, but you don't, you want a CISO.  Now you need someone who also knows how to frame security challenges for smart executives and board members who may not be very technical.  Strong CISOs are rare individuals in high demand.

Photo: ThreatTrack Security (click to enlarge)
Today security is largely a software quality problem that can't be addressed with the next vendor security-in-a-box solution.  Software security is a business and engineering quality problem, not an act of God.  Software code must be designed, built, and delivered securely.  Each step in the software development process (inception, architecture, development, testing, deployment, sunsetting) is important to the overall solution quality, and historically each has been entirely within the domain of software engineering groups.  Let's face it, software engineering leaders don't necessarily appreciate security advice on how to build systems, especially when the suggested security quality improvements reduce execution tempo, which is closely tied to performance-based compensation.

Today security is largely a software quality problem that can't be addressed with the next vendor security-in-a-box solution.  Software code must be designed, built, and delivered securely

Significantly reducing business risk depends on the CISO's ability to influence and win the support of software developers, development leaders, business executives, and board members.  Even a CISO with the best background and skills may not be able to influence positive security improvements to code quality.  A CISO is not an army of one; a knowledgeable CISO will fail without the proper support across business constituencies.  This is because security is everyone's job, not only the job of the CISO and their staff.  Influencing systemic positive change throughout an organization is difficult, but it begins with role-dependent education.  Today's CISOs must be as comfortable whiteboarding security architecture with a developer as explaining the business implications of a security vulnerability to a corporate board.  CISOs must explain why engineering quality processes must be improved and recommend specific improvements when asked.  CISOs with the best blend of technology and business experience have the best chance of improving software code quality, influencing the most positive changes to security, and winning the respect of developers.

As our most valuable assets are brought online as Internet web applications, criminals abscond with our data while companies are busy tweaking firewalls.  Many companies are squandering security investments in the wrong areas.  Indications are Fortune Top-100 CISOs don't have the best blend of skills and experience to defend software systems, the primary weakness.

The trend is that all executives share security responsibility in a significant security incident so the value of a knowledgeable security executive should not be underestimated

The best CISO defenders of tomorrow will be those with experience coding, designing, and shipping software products and services.  If a security leader with a development background is not available, build one.  Find a top engineering leader and begin building the security mindset.  Send them to security conferences where executives congregate, like the Gartner IT Security Summit.  Understanding the business implications of security, executive concerns around security, and how to communicate with executives are essential.  Send them to the SANS Institute to learn how to break software applications; theory is helpful but hands-on skills are essential.  Attend security conferences like Blackhat, DEFCON, and others.  It can take years to find the best leader and build out a team, so begin now by investing in your own organization and growing some organic talent.  The trend is that all executives share security responsibility in a significant security incident, so the value of a knowledgeable security executive should not be underestimated.
