Between open-carry laws that don’t require training or registration and a new law that incentivizes folks to turn in women or those who help them, Texas looks like it’s become quite the dangerous state. Churches and the government should establish refugee resettlement programs for any Texas women and their families who seek to leave. Texas is seeking to become a retirement-only state. Warning: it’s hard to care for retirees without younger folks.

In a surprise shift in my career, my customer and employer is now supporting work from home. After a few weeks of working from home 4 days of 5, here are a few surprising reflections:

  • Not commuting is wonderful!
  • I can happily wear Crocs and PJs 4 days out of 5. (We have no video meetings!) That whole idea of dress for success? Doesn’t apply when you’re in the groove in code.
  • Makeup is an optional thing
  • Jumping out to the gym in the middle of the day means fewer people => more access to the weights. And not having to be at work (and no video meetings) means showering is a thing that can be done when the work day is done… (No, you don’t want to share a home office with me on workout days…)
    • Surprisingly, old people at the gym are the ones who are getting too close for my comfort in COVID times. Guess who’s at the gym in the middle of the day?
    • Note that me being at the gym in the middle of the day suggests the logical inference that I am old, which I attempt to avoid acknowledging…
  • While there are no distractions from too-loud coworkers, the puppy who wants to play can consume some significant cycles that need to be accounted for in the timesheet
  • Beer can be consumed, but only (1) in the evening, (2) when you’re almost done anyway, and (3) as a stopping function. E.g., I’m on beer #2, billable time is over!

In July, I signed up to be a “fundracer” for a group doing great things in the Baltimore area. Back on My Feet is a national organization with a Baltimore affiliate. In each affiliate location, they set up running groups at local homeless shelters. They make sure the running groups have structure and running partners (both shelter residents and folks from the community), help folks connect with shoes, and connect participants with employment and housing opportunities. Their model literally walks/runs alongside the folks they’re seeking to serve, committing to regularly be there with them and connect. They’ve got some impressive stats, too, in terms of numbers of folks employed and housed through the program – check out their website. The program says: “Our unique model demonstrates that if you first restore confidence, strength and self-esteem, individuals are better equipped to tackle the road ahead.” and that they “seek to engage you in the profound experience of empowering individuals to achieve what once seemed impossible through the seemingly simple act of putting one foot in front of the other.”

I’ve fundraced for BoMF before. They get entry slots in the local Baltimore Running Festival, which runs in October as a 5K, half-marathon, and marathon. I used to be more of a runner and would train for the half. I’m older and a bit less in shape than I was, with other priorities at the moment that keep me from dedicating the time to build up to logging 12+ mile training runs on weekends. But… I can put a few fewer steps in front of the other and make the 5K (3.1 miles) happen. I’m now regularly running 2-2.5 miles during the week, with a long run on the weekend of 4 miles. I’m slow, but getting slowly faster. Using that same approach to commitment that the running club participants put in, I’m slowly seeing results. I’ll only earn success and complete the race if I keep it up, though, just as they’ll only earn their success if they keep putting in the work towards employment and housing.

If you, like me, find the approach valuable and/or inspiring, support Back on My Feet and their mission by supporting me in my fundracing. Earlier this month, I met my “goal”, which was the minimum tally to enter the race on behalf of BoMF. That said, just as your own home’s budget would appreciate any bonus amounts, so of course would BoMF’s. More $$ means the ability to support more folks and do bigger things.

Oh, did I mention? Thanks to one donor’s request, I’ll be running this in a tutu and clown socks, with a clown horn and probably a goofy hat (heat dependent). Want me to up the ante somehow? Let’s talk! Want me to show up at your event in such??! Well, that’s possible, too. Although I can’t promise to run in full Clementine mode (clown shoes are _not_ a safe running option for 3.1 miles!), other events are possible…

One last link to make it easy to contribute here!

I created a new Git project on my GitHub profile today as I began some work on a possible conference presentation. I was surprised to see a message that said I’d received an achievement badge because I’d “contributed code to the 2020 GitHub Archive Program and now have a badge for it. Thank you for being part of the program!”

Clicking through the Archive program link to find out more, I saw that “On 02/02/2020 GitHub captured a snapshot of every active public repository. Those millions of repos were then archived to hardened film designed to last for 1,000 years, and stored in the GitHub Arctic Code Vault in a decommissioned coal mine deep beneath an Arctic mountain in Svalbard, Norway.”

Which sounds kind of cool, in more ways than one. However, I’m not excited about not really getting a way to opt out of that archive. Although the message on the achievement badge notification says something about being able to opt out in settings, clicking through to settings doesn’t take me anywhere that makes it clear what setting I’d need to adjust. Further, if they’ve already “archived to hardened film designed to last for 1,000 years”, any setting I flip now is sort of moot anyway.

This isn’t the only use GitHub’s made of public code lately: their new Copilot program uses the source of public code repositories, apparently regardless of the license chosen by the repository owner. Starting to wonder if I need to look more seriously into GitLab’s offerings…

Was dismayed to discover this morning that O’Reilly is no longer putting on in-person conferences, including the wonderful OSCON conference I so enjoyed both attending and presenting at. I tripped across that news today when I went to find links to my previous talks (2014, 2016). Both talks were built around the idea of delivering the bad news that your build is broken by way of obnoxious Furby chatter. I had submitted talk topics for several years before that first talk got picked up – guess the conference review assessors similarly thought Furbies might be hard to look away from.

So, farewell, OSCON, Strata, and an abundance of other conferences. I’ve been finding my geek conference fix in other places of late, more related to cyber, and it’s not as if there isn’t an abundance of ways to learn in person and online. But OSCON will forever hold a sweet spot in my heart.

Succumbed to temptation today and bought a laptop. I’ve been thinking about it for a while. In two more weeks, I’ll need to hand back the one I’ve been using from work. This MacBook has served me well through college and capture-the-flags, and I’ll be sad to see it go, particularly since it’ll take another week after that before my new one arrives. That said, with 32GB of RAM, a 1TB NVMe drive, an NVIDIA GPU with 8GB, and an AMD Ryzen chip, the new one’s gonna put this poor box to shame. I’m going to have to grow my chops in reverse engineering and cyber exploitation to match it!


You may have seen a few more geek notes on here of late. I’ve really enjoyed jumping into CTFs. My objective isn’t to win, but to find more ways to solve puzzles.

This weekend’s adventures were a little different, though. My company sponsors UMBC’s CyberDawgs team, and they’ve asked us to contribute challenges to their upcoming CTF. I tasked our IRAD team with coming up with a few and wrote a couple myself, as well. So this weekend I spent some time normalizing our submissions’ README files and doing a final test of the submissions.

One of the submissions was really giving me trouble. The IRAD team member who’d developed it had demonstrated it to us, but the solution instructions in the README just weren’t “clicking” enough for me to reproduce a solve, much less help anyone else understand how to solve it. It’s customary in CTFs to have a Discord channel where mentors can offer assistance to those on the right track; given that I don’t want to be up all night myself providing that support, I thought it best to provide a walkthrough someone else could follow.

Not only did I “crack” it (helped, of course, by the solution instructions in his README), but I was then able to provide a linked reproducible recipe using a tool called CyberChef that is really useful for a lot of CTF grunt work. I’m avoiding linking to the recipe or giving any more info on the challenge, of course, given that there’ll hopefully be lots of folks taking a crack at it in early May. I’m now more confident, though, that there may be some folks who solve it, AND I better understand a particular kind of encryption approach.

Notes from this week’s CTF – geek notes for Tina. Should have collected notes on more challenges, but, eh…

Received a PCAP file that said it had secret coordinates in it. The PCAP was entirely USB traffic, specifically URB_INTERRUPT packets.

  • https://wiki.osdev.org/USB_Human_Interface_Devices#USB_keyboard
  • Isolated traffic for appropriate device, after examining device descriptor response to find keyboard
  • Started mapping out the HID keys by hand, until a teammate suggested https://github.com/TeamRocketIst/ctf-usb-keyboard-parser
  • Ultimately used tshark to extract the data, via tshark -r ~/Downloads/file.pcap -Y 'usb.device_address == 2 and usb.data_len > 0 and !(usbhid.data == 00:00:00:00:00:00:00:00)' -T fields -e usbhid.data | sed 's/../:&/g' | sed 's/^://g' > keys.txt
  • (Note: the second sed is there because the recommended command ended up prefixing all the lines with : – the second sed strips that off. A rough sketch of decoding the resulting bytes follows this list.)
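
For the curious, here’s a minimal sketch of the decoding step itself – my own illustration, not the linked parser’s code. Assumptions flagged loudly: keys.txt is exactly what the tshark command above produced (one 8-byte HID report per line as colon-separated hex), the keyboard is a US layout, and only letters, digits, and a handful of common keys are mapped.

```python
# Hedged sketch: decode USB HID keyboard reports from keys.txt.
# Assumes the keys.txt produced by the tshark command above; US layout;
# letters/digits plus a handful of common keys only.
KEYMAP = {0x04 + i: chr(ord('a') + i) for i in range(26)}      # 0x04-0x1d -> a-z
KEYMAP.update({0x1e + i: "1234567890"[i] for i in range(10)})  # 0x1e-0x27 -> 1-0
KEYMAP.update({0x28: "\n", 0x2c: " ", 0x2d: "-", 0x37: "."})   # Enter, space, -, .

decoded = []
with open("keys.txt") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        report = bytes(int(b, 16) for b in line.split(":"))
        modifiers, keycode = report[0], report[2]  # byte 0 = modifier bits, byte 2 = first keycode
        if keycode == 0:
            continue                               # key-up / idle report
        ch = KEYMAP.get(keycode, "?")
        if modifiers & 0x22:                       # left (0x02) or right (0x20) Shift held
            ch = ch.upper()
        decoded.append(ch)
print("".join(decoded))
```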

I gave a talk in November to a local high school about computer science as a career field. Aha, I think – I’ve given this talk before – I’ll just brush up my well-prepared slide deck.

My slide deck has a graphic in it that looks something like the below. All credit to Daniel van der Ende and his work on the GitHub Data Challenge in 2014. It’s an interesting way to show the various combinations of languages used in projects today. It’s actually common nowadays for a project to have multiple types of code in it. Often there’ll be a front-end (often JavaScript + HTML + CSS) with some sort of back-end. The point I wanted to convey in the original presentation was that software engineers often don’t just need to know one language. I then would riff lightly on which of the languages they could see in my slide I’d worked with in some form or fashion. (In the snippet you can see of the image: Perl, Scala, Go, JavaScript, Ruby, and Lua. I did just enough CoffeeScript to not want to do it anymore…)

Well, now it’s 2021. The slide information needs to be updated, and Mr. van der Ende has not updated his image, but he was kind enough to make available his source code and a handy README file which walks (loosely) through how to get the data.

Challenges solved so far:

  • getting access to BigQuery
  • finding new sources of the data, since the dataset van der Ende references doesn’t seem to exist anymore
  • convincing BigQuery that I have permission to run queries
  • updating the query to match the new data source, including figuring out how to flatten arrays – really not in his original flow
  • downloading MySQL to my developer machine and setting up a database and username/password combo
  • updating van der Ende’s code to read directly from a CSV, rather than assuming I’m using a JSON file
  • getting PHP to work on my developer workstation – this particular box has done lots of things for me lately, but PHP hasn’t been one of them
  • figuring out how to populate the languages list the code asked for, given the languages represented in the dataset I downloaded. (For the record, awk, sort, uniq was the happy combo.)
  • uh, figuring out a better way to ingest the CSV, since pulling in the full file at once took up too much memory for my computer (a sketch of the streaming approach follows this list)
  • (more to come undoubtedly to get it working…)
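
On that last ingest point, here’s a minimal sketch of the streaming approach, with the awk/sort/uniq-style combo counting folded in. The file name languages.csv and its (repo_name, language) column layout are my stand-ins, not van der Ende’s actual format:

```python
# Hedged sketch: stream the CSV row by row instead of slurping the whole file.
# languages.csv and its (repo_name, language) columns are assumed stand-ins.
import csv
from collections import Counter, defaultdict

repo_langs = defaultdict(set)
with open("languages.csv", newline="") as f:
    for row in csv.DictReader(f):   # one row at a time keeps memory modest
        repo_langs[row["repo_name"]].add(row["language"])

# The awk | sort | uniq step, in-process: count each language combination.
combos = Counter("/".join(sorted(langs)) for langs in repo_langs.values())
for combo, count in combos.most_common(10):
    print(f"{count:8d}  {combo}")
```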

Note: I ultimately ran into enough things with it that I left the original image. Still on my todo list to bring this to resolution…

My masters classes keep sending us into Wireshark to analyze packet files. I thought I had a decent understanding of how to use Wireshark from some previous experience through work, but I keep finding new tricks as I try to figure out things about unknown protocols. Note that I’m using Wireshark 3.0.3, because that’s what’s installed in the lab infrastructure. I am aware that Wireshark 3.4 is out: my plan is to play with that version on my personal computer to see new goodies.

Copy and Paste

We keep needing to fill out spreadsheets of interesting things learned. We’re running Wireshark through a VDI infrastructure and I’m typically doing my homework on a laptop, so with limited screen real estate, even my touch typing skills aren’t helpful enough. The Copy capability in Wireshark lets me capture just the value for the field – highly useful for things like MAC addresses.

Protocol Hierarchy

Forget about randomly traversing files that include 100K packets – let the protocol hierarchy show likely interesting data points within the file. Filter by said protocol, and data patterns emerge. The Conversations and Endpoints statistics areas are worth calling out, too. Nice ways to get a holistic view of what’s going on in the file and what might be worth diving into.
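
If you’d rather script that first look, here’s a rough analogue of the Protocol Hierarchy view in Python. Assumptions: pyshark (a third-party wrapper around tshark) is installed, and homework.pcap is a stand-in file name.

```python
# Rough analogue of Wireshark's Protocol Hierarchy view.
# pyshark and homework.pcap are assumptions, not from the coursework.
import pyshark
from collections import Counter

counts = Counter()
cap = pyshark.FileCapture("homework.pcap")
for pkt in cap:
    counts[pkt.highest_layer] += 1   # top-most dissected layer per packet
cap.close()

for proto, n in counts.most_common():
    print(f"{proto:15s} {n}")
```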

Statistics -> …

We’re looking at SCADA pcap files, including BACnet. Delighted to find a traversal view for BACnet that let me inspect the devices and services seen in the pcap. I was less happy to see that iFix wasn’t in the list, and that Wireshark just treats it as plain TCP (again, with my older version of Wireshark and its default set of dissectors). Possibilities for expansion.

Expert Analysis

There’s a menu option for ‘Expert Analysis’ that I hadn’t played with before. Pull up its data, then let it create filters to show just that data – voila. Evidence of TCP retransmissions? Yes, please.
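
For scripting the same check, a minimal sketch, again assuming pyshark and a stand-in file name; tcp.analysis.retransmission is the display filter Wireshark itself builds for flagged retransmissions.

```python
# Hedged sketch: list expert-flagged TCP retransmissions without the GUI.
# pyshark and homework.pcap are assumptions; the display filter is Wireshark's.
import pyshark

cap = pyshark.FileCapture("homework.pcap",
                          display_filter="tcp.analysis.retransmission")
for pkt in cap:
    print(pkt.number, pkt.ip.src, "->", pkt.ip.dst)
cap.close()
```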