Friday, September 25, 2020

CircleID shilling others' stuff?

It's been years since I've posted one of my opinion pieces, but this one annoyed me enough to write about it. On 22 September, CircleID posted "100K+ List of Disposable Email Domains Under Security Analysis". I dislike the post because, in a technical sense, it is a poorly written and poorly researched piece. A more accurate title would use "Marketing" instead of "Security".

Issues that I have with the "article" follow. Note: I use "article" in place of "ad" because, as an advertisement, the "article" is even more of a disappointment.

1) CircleID notes that it's a sponsored post. This means that someone is trying to sell/promote something. A minor bit of research will reveal that the "author" of the article is willing to sell you access to their list. I originally subscribed to CircleID's RSS feed because they posted about some of the ICANN-level politics and issues relating to the management of DNS domains. I've now moved CircleID to my "probationary" list.

2) There's no personal attribution for the article (unless someone legally changed their name to "WhoisXML API").

3) The article avoids discussing the benefits of using disposable email. Not everyone considers becoming a "key email marketing metric" a goal in life. Most consider "key email metrics" an "unwanted commodity" (i.e., being added to marketing lists that are sold and resold). Notice that I'm being nice here and not using the word made famous by Monty Python?

4) There is an unsupported claim that email security solutions can be further strengthened by filtering out disposable email domains. This is true only if you consider "key email marketing metrics" as having value. Legitimate email domains aren't immune to email blackholes. Example: someone going to a conference might give out a "temporary email address" (in their corporate domain) that ceases to exist a few weeks after the conference closes. Justification: avoidance of extended bouts of unwanted email.

5) The list of categories that "stood out" seems a bit selective, in that it ignores the primary use case for disposable email addresses. In short, they're disposable (i.e., an address is used for one specific purpose and then allowed to expire). This ignored category is used to:

  • acquire a vendor's marketing fluff without becoming a "key email marketing metric"
  • acquire other information without becoming a "key email marketing metric"
  • enter in-person contests for $5 coffee mugs or sticker sets without becoming a "key email marketing metric"
  • fill out "surveys" without becoming a "key email marketing metric"
  • acquire other low-value offerings without becoming a "key email marketing metric"

Do you sense a common theme here? I do.

6) The hidden author's math is extremely weak. From the article: "We analyzed one such a list which, as of 31 July 2020, contained 109,352 disposable email domains. This is enough to create millions of throwaway email addresses."

Given a single email domain, over a million email addresses can be generated with just a four-character username limit (a-z and 0-9, omitting any special characters). If you do the math (36 x 36 x 36 x 36), it comes to 1,679,616 "words" that you can put on the left-hand side of the "@".

Using that same 4-character limit on the "researched" 109,352 suspect domains, the math allows you to generate 183,669,368,832 (almost 2e+11) email accounts. That's just a little bit more than "millions of throwaway email addresses".

Bumping the username side of the email address to 6 characters results in over 2e+14 email addresses (more accurately, in the 238,035,500,000,000 ballpark). Imagine what you can do with 12- or 16-character usernames!
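
For anyone who wants to check the arithmetic, a few lines of Python reproduce the numbers above (the 109,352 domain count is the article's; the character set and username lengths are the same assumptions I made):

    # Back-of-the-envelope counts for throwaway email addresses.
    DOMAINS = 109_352        # disposable domains, per the article (31 July 2020)
    CHARSET = 36             # a-z plus 0-9, no special characters

    for username_len in (4, 6, 12, 16):
        per_domain = CHARSET ** username_len
        total = per_domain * DOMAINS
        print(f"{username_len}-char usernames: {per_domain:,} per domain, "
              f"{total:,} across all {DOMAINS:,} domains")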

7) WhoisXMLAPI's pricing appears a bit steep, too. For just my email address (a single user account in a single domain), on 23 September, I received 11 emails that the system deemed "unsolicited" and another 22 for which I wish I'd used a disposable email address. If you consider that "normal" and expand it out to a 30-day month, that's 990 undesired emails, 660 of which I have to delete manually. WhoisXMLAPI's "free" service has an upper limit of 500 queries. The next tier up allows 2,000 queries per month, at a $15/month rate. If I have two employees, that bumps me into the next tier, at $30/month.
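
If you want to check that extrapolation, it's one-liner territory; the only inputs are my one-day sample and the tier limits quoted above:

    # Rough extrapolation of a one-day sample (23 September) to a 30-day month.
    unsolicited, regretted, days = 11, 22, 30
    monthly = (unsolicited + regretted) * days   # 990 undesired emails
    manual = regretted * days                    # 660 deleted by hand
    for tier, limit in (("free", 500), ("$15/month", 2000)):
        print(tier, "tier:", "covers it" if monthly <= limit else "doesn't cover it")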

If the query results are delivered via a DNS-based service, this is extremely expensive (2,000 queries per month for $15?). If they're reselling information that is freely available elsewhere on the Internet (SORBS, Spamcop, etc.), I have more reasons to dislike them.

I don't like their pricing plan either. They have you buy credits, which must be used within a single month. If you don't use the credits, they expire. It's not their fault if you overestimate your spam load for the coming month. While this minimizes their need for customer interaction, it maximizes yours (if you worry about costs). A simple metering system would be more customer friendly.

I'd much rather worry about my own domain ending up on an email blacklist. For that, I can perform the RBL lookups myself (with a bit of code), perform those same lookups via a free web site (e.g., DNSWatch), or have someone monitor my domain (e.g., MXToolBox), all for free.
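
For the record, "a bit of code" really is just a bit. Here's a minimal sketch of an RBL lookup using only Python's standard library; the two blacklist zones are well-known public examples, and 192.0.2.25 is a placeholder for your own mail server's address:

    import socket

    def rbl_listed(ip, zone):
        # DNSBL convention: reverse the octets and prepend them to the zone.
        # An answer means "listed"; NXDOMAIN (gaierror) means "not listed".
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False

    for zone in ("zen.spamhaus.org", "bl.spamcop.net"):
        print(zone, rbl_listed("192.0.2.25", zone))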

Overall, I think the article was aimed at the non-technical CIO, CSO, or CTO (yes, they do exist). The primary sales tactic seems to be the old standby: be afraid, be very afraid. It's a bit disappointing that CircleID is promoting this stuff instead of their own articles, many of which caused me to subscribe to their RSS feed years ago.

Tuesday, September 15, 2020

TT-RSS scrollbars

I like the night theme in TT-RSS. However, the scrollbars are very thin, and attempting to use them is exceedingly annoying. This is easily rectified.

The file to edit is tt-rss/themes/night.css. There are two entries that modify the width of the various scrollbars. Search for "scrollbar" and look for "width". The default width is 4px. Set it to something between 8 and 12 pixels, then refresh the web page.

Sunday, June 21, 2020

Demo-ing Dhaval Kapil's icmptunnel in Docker

A recent NCL competition included a challenge that frustrated a number of participants, one that dealt with extracting data from a PCAP containing ICMP tunneling traffic (i.e., the PCAP file was provided; the goal was to extract the data to acquire the flag).

The local community college has a Cyber Club, which typically meets on Friday nights. Membership is made up of current ITN students and alumni. With the recent school closures and quarantines, the in-person meetings were cancelled. However, the "die hards" decided to move the meetings online, using Discord's voice and screen-sharing capabilities. (We were already using Discord as a message server.)

There was enough frustration with the NCL challenge that four of us (from the group) attacked the problem in two parts: 1) create an architecture in which our own PCAPs could be generated, and 2) write tools or processes to extract/un-tunnel the data from the captured ICMP packets.

Solving problem #1 took a couple of weeks, mostly due to the selection of the ICMP tunnel software. There are three variants on GitHub; we selected Dhaval Kapil's icmptunnel utility (link below). Being the most stubborn in the group, I was the first to complete part 1. The configuration is easy, once you realize that English is probably not the author's first language (i.e., there are logic errors in the documentation).

I used Docker and OpenVSwitch to create the architecture (image below). To keep things simple (some people have no Docker or OpenVSwitch experience), I automated as much as possible (links below), so that users only need to run a couple of scripts to create the architecture (one to build/pull images, another to deploy the containers and network).

The architecture simulates a network where a client resides behind a firewall, which blocks "normal" traffic but allows ICMP echo requests and echo replies through. A "proxy" serves as the ICMP tunnel endpoint; it decapsulates the IP traffic from the ICMP traffic and forwards it on to the target web server.

Two others used VMs to simulate their architectures, using the same tunnel software. They were hung up on the same logic errors that had stumped my efforts. They were able to fix their architectures by looking at the Docker-based scripts.

This past Friday (yesterday), two of the others demo'd their tools (scripts) for extracting content from ICMP PCAP files, produced by connecting a Wireshark sniffer into the architecture. (My code includes the Wireshark container with a web interface, from LinuxServer; link below.)

One Club member (DgtlCwby) has created a very tightly written Bash script, which controls tshark and walks through the process of extracting the data. It works, producing an output identical to the graphic pulled from the web server.

Another student produced a Python/Scapy script which also works. He expressed some concerns about the code, having built it from a number of online articles, and wanted to improve it. This turned out to be a deep rabbit hole into which the four of us fell, making suggestions for at least two hours past the normal "end" time. (They were still tweaking the script when I bailed to join another call.)
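
To give a flavor of the approach (this is not the student's script, just a minimal sketch of the general idea): icmptunnel carries the original IP packet as the payload of the ICMP echo packets, so extraction amounts to pulling those payloads back out of the capture and re-parsing them. The filenames here are placeholders, and both traffic directions end up crudely concatenated:

    from scapy.all import rdpcap, ICMP, IP, Raw, TCP

    packets = rdpcap("icmp_tunnel.pcap")     # placeholder capture file

    extracted = b""
    for pkt in packets:
        if pkt.haslayer(ICMP) and pkt.haslayer(Raw):
            inner = IP(pkt[Raw].load)        # re-parse the ICMP payload as an IP packet
            if inner.haslayer(TCP) and len(inner[TCP].payload):
                extracted += bytes(inner[TCP].payload)

    # The reassembled bytes are the tunneled exchange (both directions mixed).
    with open("extracted.bin", "wb") as f:
        f.write(extracted)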

DgtlCwby has given me permission to generate an article based on his script, explaining each step, which is what I'll be doing in the coming days (we're all learning as we go).

Links:

Wednesday, January 1, 2020

Today's project (setting up a knockd lab for CTF training) isn't improving my opinion of Ubuntu packaging much. This isn't the first time in the past week that I've run across munged packages and old code.

The scenario for the lab is that rubber hose cryptography was employed against an evil hacker and produced the following:

  • the hacker's handle
  • his workstation password
  • a sequence of numbers: 2222, 3333, 4444
  • and that an encryption key will be available on a certain port

The student will be tasked with finding the hidden server in the hacker's private network, figuring out how to open the port on the server, and obtaining the key from the open port. The unstated facts include that only nmap and netcat are available on the hacker's workstation.
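
The knock itself is trivial, which is part of the lesson. In the lab the student only has nmap and netcat, but here's a hedged Python illustration of what the knock-then-grab sequence boils down to (the host address and the key-serving port are placeholders I picked for the example):

    import socket, time

    HOST = "10.0.0.99"                   # placeholder: the hidden server
    KNOCK_SEQUENCE = (2222, 3333, 4444)  # from the "rubber hose" intel
    KEY_PORT = 7777                      # placeholder: the port that opens afterwards

    # A connection attempt (even one that is refused or times out) sends the SYN
    # that knockd listens for, so attempting each port in order is the "knock".
    for port in KNOCK_SEQUENCE:
        s = socket.socket()
        s.settimeout(1)
        try:
            s.connect((HOST, port))
        except OSError:
            pass                         # refused/filtered is expected
        finally:
            s.close()
        time.sleep(0.5)

    # Grab the key from the now-open port (netcat equivalent: nc 10.0.0.99 7777).
    with socket.create_connection((HOST, KEY_PORT), timeout=5) as conn:
        print(conn.recv(4096).decode(errors="replace"))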

In the first 30 minutes, I was able to design a Docker container that runs supervisord, knockd, socat, and an internal (to the container) version of iptables. In the subsequent hour, I tried various things to get knockd to properly run the close-port command. Even the configuration examples provided by the original authors didn't work. The "iptables -D" commands would work on the command line but not when called by knockd.

To make a long story short: if you're using the Ubuntu knockd package, the close command will need to be wrapped in "bash -c 'the command'" before it'll work properly. I've added "patching" to my to-do list, but it's near the bottom (it won't be any time soon). At the top of the list is adding this instance to the OVS architecture, which resides behind a Guacamole instance, and adding a dynamic flag calculation for use in CTFd.
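
For anyone who hits the same wall, the workaround ends up looking something like the knockd.conf fragment below. The knock sequence matches the lab scenario; the key-serving port (7777) and file paths are placeholders, and in my case only the close/stop command needed the bash -c wrapper:

    [options]
        logfile = /var/log/knockd.log

    [openKeyPort]
        sequence      = 2222,3333,4444
        seq_timeout   = 15
        tcpflags      = syn
        start_command = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 7777 -j ACCEPT
        cmd_timeout   = 30
        stop_command  = bash -c "/sbin/iptables -D INPUT -s %IP% -p tcp --dport 7777 -j ACCEPT"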