Intriguing item in my Slack feeds this morning:

What is it: Weekly challenges from the picoCTF gym in #wsc-ctf-challenge

How it works: Each week, a challenge from the picoCTF gym will be shared in #wsc-ctf-challenge with a cross-post to #general; the following week, a solutions thread will be opened for people to discuss their solutions. (Please keep spoilers in the thread for people who are solving challenges later.)

Challenges will be shared Jan 23rd, Jan 29th, and Feb 5th, with a Zoom walkthrough planned for Feb 12th.

This week’s challenge was titled ‘Scavenger Hunt’ and led us to a very basic webpage. The hunt was on! Since it looks like they’ll be releasing these on Sundays each week, I’ll see where each Sunday leads and then write up my discoveries.

First step: inspect the HTML through developer tools. There was a comment in the primary webpage that described itself as the first _part_ of the flag. No mention of how many parts.

OK, looking further: the site’s JavaScript gave me a clue that led me to the site’s robots.txt file, which is used to keep crawlers like Google from indexing parts of the site. That file told me I had part 3 of the flag, and mentioned that the next part was related to the site being an Apache server.
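A quick sketch of that check, reusing the challenge URL that shows up in my notes below:

    curl http://mercury.picoctf.net:27393/robots.txt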

Checked the CSS next, which showed me part 2.

This is where I was stuck for a while. I tried a number of things related to the site being an Apache server.

  • Brute-force trying a few potential file paths: admin, README.txt, …
  • Tried running TRACE against the site, after reading a couple of articles [1, 2] which talked about hardening your Apache server: curl -v -X TRACE http://mercury.picoctf.net:27393
  • Tried banner grabbing via nc, since that’s one of the other things the links suggested turning off: nc {ip of box} {port}, then immediately followed by HEAD / HTTP/1.0 (also tried 1.1); see the sketch after this list
  • Tried fuzzing to find unrecognized files: sfuzz -S {site} -p {port} -T -f /usr/share/sfuzz-db/basic.http -L picofuzz.txt -q
    • My plan was to grep the log file for ‘flag’, ‘part’, or even regexes of chars_chars_chars, since the flag structure seemed to look like that
    • I left sfuzz running while I was at church, but it didn’t even retrieve the JavaScript files or robots.txt
  • Tried directory busting via gobuster dir -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt -u http://mercury.picoctf.net:27393
    • No results
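For reference, here’s the banner-grab sequence from the list above, collapsed into a single pipeline against the challenge host and port (the interactive version types the HEAD request by hand after nc connects):

    printf 'HEAD / HTTP/1.0\r\n\r\n' | nc mercury.picoctf.net 27393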

Finally I opted to look around and find some other CTF writeups that referenced Apache, in case they led to more angles. I found this one, which mentioned using dirb to identify entry points. Running dirb http://mercury.picoctf.net:27393 turned up an .htaccess file, which had a flag in it and another clue.

The next clue said: “I love making websites on my Mac, I can Store a lot of information there.” I started looking for ways to find common Mac files. I used to have a Mac, so I was used to seeing an extra file or two around, but couldn’t remember what they were called. I started looking for sample .gitignore files for Mac developers, and found this posting which mentioned that “This will ignore any files named .DS_Store, which is a common file on macOS.” Aha – now I see why Store was capitalized. Sure enough, browsing for that file gave me another piece of the flag and a message that said I’d completed the scavenger hunt.
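A sketch of that final fetch, same host and port as before:

    curl http://mercury.picoctf.net:27393/.DS_Store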

One other thing I did try which I’ll want to use again in the future: I used the tool “nikto” which exists to scan web servers for known vulnerabilities. When pointed at the hostname and port, it gave me some information about the system, including the existence of the .htaccess file. It would apparently have also pulled back the banner information from the web server.
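For my future self, the invocation was along these lines (-h takes the host, -p the port):

    nikto -h mercury.picoctf.net -p 27393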

My last set of “next goals” was:
* connect it in with Git, GitHub Actions, and GitHub Pages
* Pull in a collection of markdown items (à la sermon descriptions, correlated with YouTube links)
* Netlify CMS to edit markdown items, commit them into the repository, and then see the flow-through of GitHub Actions -> gh-pages branch

All of the above I accomplished a few days ago, though nothing looks visually appealing yet. I’ve also added a navigation bar, set up navigation between the sermon descriptions, and demonstrated that our tithe button will work so long as we’re still hosted via harundalepc.org, which we will be.

Next step: a Contact Us form. We have a very simple behavior today: there’s a form with 4 fields on it, and the backend system sends an email to the church office with the information. For a static site, though, there’s no “backend system”.

This is a known problem for static sites, and there are several options out there offering to solve it. Because I think our ‘Contact Us’ form gets used very minimally, though, I’m opting not to pay a monthly fee to any such service, and am instead working to get an AWS Lambda function going. At the moment, I’ve worked my way up from sending a test email from AWS SES (last night) to using an AWS Lambda function to send email given a presumed payload. Note that these things were all new to me, and I’ve gotten the barest-bones versions of them working. Now to wire up an “API Gateway” so that I’ve got a web endpoint I can trigger from our static site to hit the Lambda function.
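As a sketch of where I am, with hypothetical names (contact-us-mailer for the function, placeholder addresses, and a made-up four-field payload), the two working pieces look roughly like this from the AWS CLI:

    # Send a test email through SES (addresses must be verified while SES is sandboxed)
    aws ses send-email \
      --from "webform@example.org" \
      --destination "ToAddresses=office@example.org" \
      --message "Subject={Data=Contact-form-test},Body={Text={Data=Hello-from-SES}}"

    # Invoke the Lambda with a presumed contact-form payload
    # (--cli-binary-format is an AWS CLI v2 thing)
    aws lambda invoke \
      --function-name contact-us-mailer \
      --cli-binary-format raw-in-base64-out \
      --payload '{"name":"Test","email":"tester@example.org","phone":"555-0100","message":"Hi"}' \
      response.json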

That _should_ be the major elements and challenges I needed to knock out on the site before being able to “just” focus on styling and putting it up somewhere for review. Getting there!

Nerderypublic.com is delivered with WordPress – no issues, but haven’t tried to do much with it.

Our church’s website is also delivered with WordPress. Arrgh. It’s not well built and is quite the pain to work with; as the primary maintainer, I want to get off of it. We have other reasons to want to get off of it, too, including that our hosting environment for the site + the church email has had issues. Ergo, this geek is looking for other solutions.

I’m leaning towards a GitHub page delivered solution, using a static site generator which looks at markdown files for entries. Experimenting with Nuxt.js at the moment, using GitHub Actions to push out the generated site, and hopefully integrating Netlify CMS to provide ease of use for contributors other than myself. We’ll see how the experiment turns out.

Progress tonight:

  • Set up a virtual machine with a Linux environment to work in
  • Installed npm
  • Created a “church-site” using the command npm init nuxt-app church-site, per the instructions on the Nuxt site
  • Choices of note:
    • JavaScript instead of TypeScript: I’ve had previous experience with TypeScript and thought it made things too obscure. Likely better by now, but it still left a bad taste in my mouth
    • Package manager: NPM instead of Yarn: again, familiarity, though suspect Yarn would’ve worked just as well
    • UI framework: Bootstrap Vue: some familiarity with generic Bootstrap. No familiarity with the rest listed as options
    • Nuxt.js modules: eh, took ’em all: Axios promise-based HTTP client, Progressive Web App, Content via Git-based headless CMS. Maybe we won’t use ’em. Right now, I’m not worried about optimizing.
    • Linting tools: took ’em all.
    • Testing framework: have some past experience with Jest, none with the others. (Hey, I haven’t done Web app development in a bit…)
    • Rendering mode: Universal
    • Deployment target: Static (Static / Jamstack hosting)
    • Development tools: took ’em all
    • Continuous integration: GitHub Actions
    • Version control system: Git

Results:

  • a ‘Welcome to your Nuxt Application’ site available on localhost:3000 when I run npm run dev
  • npm run generate doesn’t complain, and seems to have generated content under dist (Nuxt’s default output directory for static generation)
  • I was then able to update some items in the components/Tutorial.vue component, as well as the site name in nuxt.config.js, and see them reflected in the running website

Next goals:

  • connect it in with Git, GitHub Actions, and GitHub Pages
  • Pull in a collection of markdown items (à la sermon descriptions, correlated with YouTube links)
  • Netlify CMS to edit markdown items and see the flow-through

Day 2

Got it pushed to GitHub, working with GitHub Actions, and auto-building to a GitHub Pages site. It’s still the same starter system, but the scaffolding has somehow opened 6 pull requests on my project to bump up various dependencies. Each of those pull requests triggered a successful auto-build.

Things I wrestled with:

  • commit-lint: sure, I’ve noticed that lots of git commits in various public repos follow a particular structure of late – something like “feat(build): some text here”. That said, I was surprised when my commit wouldn’t push unless it followed that structure. I’m sure I could turn off that particular pre-commit hook, but for the moment I’m leaving it there and trying the new (to me) style.
  • prettifier: I left on all the linters for the source code originally. I was regretting that earlier when my code wouldn’t pass inspection and I couldn’t find info as to why. I finally turned off linting in the build steps. For my purposes, it may not be all that necessary: I’m not really trying to sustain a community of developers with a consistent-looking codebase. For learning purposes, I’ll futz with it again in the future, but I’m not going to let it block progress.
  • somehow my original git clone didn’t set up a remote? I had to manually add one, which confoozled me for a minute. Resolved, but… Note that this is the first time I’d set up an SSH key against GitHub, apparently… I thought I had before, and I’ve definitely done it regularly for GitLab, but maybe that was the difference in procedure: the clone via SSH. Dunno. Got past it, regardless. (The manual fix is sketched below.)
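For the record, the fix was along these lines (the repo path here is hypothetical, matching the church-site project from earlier, and the branch name is assumed):

    git remote add origin git@github.com:<my-user>/church-site.git
    git push -u origin main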

On Thursday, Dec 16th, I turned in my last paper for my last project for my last class of my cybersecurity degree. On Friday, December 17th, my teammate turned in the last deliverable of the project. I’m done! We’ve gotten feedback on our deliverables already (“Exceeds Expectations” – a common refrain) and a hearty “best wishes in your future endeavors” from our professor. I’m done! I’m done! The grade hasn’t posted on my transcript yet, but UMGC is holding virtual commencement exercises today. I’m walking on cloud nine, just not on a stage. I wouldn’t have walked on the stage anyway – I just wanted the achievement, not the hassle of getting to some event somewhere to be announced to people I don’t even know.

Instead, I’ll spend my weekend working with balloons for a Clementine gig this afternoon and just generally being ecstatic that I’m done!

Bit of context: in June, I changed companies and thus changed projects. In the world I work in, it can take a bit to get new accounts and be viable as a developer, so I think by mid-July I was committing code into an established baseline for a monolith service. Technical architecture: Spring Boot, providing RESTful services, interacting with JPA repositories, and a smidge of interaction via Feign clients out to another service. Code is there, can be interactively debugged, etc. Plenty of meat to dig into, plenty of tools to do it with, and a decently robust codebase for a heavily used system for our customers.

Over the summer, we changed objectives. Instead of adding new capabilities to the project, we were figuring out how to safely port it to another environment, where most of the original code for the monolith wouldn’t make it just yet. That is: rearchitect it a bit, figure out how we could stub some things out, borrow what we could of the build system, and make it work. Our code would be developed in the new environment and imported into the old, and the goal was to be able to develop new things while not breaking things in the old. Challenging, particularly since the old thing was still moving forward with or without us. Still the same technical architecture, but less code to work with (since not all of the production code made it into our new environment). And moving targets on versioning: is our version foo+1 compatible with the production foo+1 in the production environment? Did they change something we rely upon? Note that things don’t change often in the areas we’re dealing with, but, since the production code’s model is that all things are at the same version, there’s a bit of extra strat-eg-ery to work through. And, of course, we don’t have a strongly built-out test dataset or deployment infrastructure in the new environment.

We’re not quite resolved as of early October. But now we’re pivoting to a new thing. Entirely new objectives, entirely new codebase, entirely new architecture: a switch to providing multiple microservices, using Reactive programming and API calls between them. Reading up this morning on Reactive programming, I was relieved to see the statement: “If you’re familiar with Spring MVC and building REST APIs, you’ll enjoy Spring WebFlux. There’s just a few basic concepts that are different.” (1). I’ve long thought Matt Raible was a good geek whisperer, from, I think, well back in Struts and AppFuse development days. Matt apparently collaborates with Josh Long (@starbuxman), who wrote much of the code Matt included in that post. So I hop out to @starbuxman and see the following near the top of his feed:

Amused that “a few basic concepts that are different” could translate to “480 pages 😯”. Recognizing that reactive-style programming’s been out for a few years now and is a mature construct, I’m not super worried. I do have some development background in asynchronous programming and event handling, after all, based on an interesting websockets-based web user interface I built out a few years ago. Still hoping that 480 pages of “Reactive Spring” is really a rehash of “everything you otherwise need to generally know about Spring” with a few extra Reactive details. Else I’ll start keeping this emoji ( ☢️ ) a little closer at hand and “reactive” might start referring to my facial expression when we get the next new shift in direction.

I created a new Git repository on my GitHub profile today as I began some work on a possible conference presentation. I was surprised to see a message saying I’d received an achievement badge because I’d “contributed code to the 2020 GitHub Archive Program and now have a badge for it. Thank you for being part of the program!”

Clicking through the Archive program link to find out more, I saw that “On 02/02/2020 GitHub captured a snapshot of every active public repository. Those millions of repos were then archived to hardened film designed to last for 1,000 years, and stored in the GitHub Arctic Code Vault in a decommissioned coal mine deep beneath an Arctic mountain in Svalbard, Norway.”

Which sounds kind of cool, in more ways than one. However, I’m not excited that there isn’t really a way to opt out of that archive. Although the message on the achievement badge notification says something about being able to opt out in settings, clicking through to settings doesn’t take me anywhere that makes it clear which setting I’d need to adjust. Further, if they’ve already “archived to hardened film designed to last for 1,000 years”, I’m thinking any setting I change now is sort of moot anyway.

This isn’t the only use GitHub’s made of public code lately: their new Copilot program uses the source of public code repositories, apparently regardless of the license used by the repository owner. Starting to wonder if I need to check more seriously into GitLab’s offerings….

Was dismayed to discover this morning that O’Reilly is no longer putting on in-person conferences, including the wonderful OSCON conference I so enjoyed both attending and presenting at. I tripped across that news today when I went to find links to my previous talks (2014, 2016). Both talks were based around the idea of delivering the bad news that your build is broken by way of obnoxious Furby chatter. I had submitted talk topics for several years before that first talk got picked up – guess the conference review assessors similarly thought Furbies might be hard to look away from.

So, farewell, OSCON, Strata, and an abundance of other conferences. I’ve been finding my geek conference fix in other places of late, more related to cyber, and it’s not as if there isn’t an abundance of ways to learn in person and online. But OSCON will forever hold a sweet spot in my heart.

Succumbed to temptation today and bought a laptop. I’ve been thinking about it for a while. In two more weeks, I’ll need to hand back the one I’ve been using from work. This MacBook has served me well through college and capture-the-flags, and I’ll be sad to see it go, particularly since it’ll take another week after that before my new one arrives. That said, with 32 GB of RAM, a 1 TB NVMe drive, an NVIDIA GPU with 8 GB, and an AMD Ryzen chip, the new one’s gotta put this poor box to shame. I’m going to have to grow my chops in reverse engineering and cyber exploitation to match it!


You may have seen a few more geek notes on here of late. I’ve really enjoyed jumping into CTFs. My objective isn’t to win, but to find more ways to solve puzzles.

This weekend’s adventures were a little different, though. My company sponsors UMBC’s CyberDawgs team, and they’ve asked us to contribute challenges to their upcoming CTF. I tasked our IRAD team with coming up with a few, and I wrote a couple as well. So this weekend I spent some time normalizing our submissions’ README files and doing a final test of the submissions.

One of the submissions was really giving me trouble. The IRAD team member who’d developed it had demonstrated it to us, but the solution instructions in the README just weren’t “clicking” enough for me to reproduce a solve, much less help anyone else understand how to solve it. It’s customary in CTFs to have a Discord channel where mentors can offer assistance to those on the right track; given that I don’t want to be up all night providing that support myself, I thought it best to be able to provide a walkthrough for someone else.

Not only did I “crack” it (helped, of course, by the solution instructions in his README), but then I was able to provide a linked reproducible recipe using a tool called CyberChef that is really useful for a lot of CTF grunt work. I’m avoiding linking to the recipe or giving any more info on the challenge, of course, given that there’ll hopefully be lots of folks taking a crack at it in early May. I’m now more confident, though, that there may be some folks who solve it AND that I better understand a particular kind of encryption approach.

I gave a talk in November to a local high school about computer science as a career field. Aha, I think – I’ve given this talk before – I’ll just brush up my well-prepared slide deck.

My slide deck has a graphic in it that looks something like the below. All credit to Daniel van der Ende and his work on the GitHub Data Challenge in 2014. It’s an interesting way to show the various combinatorics of the languages that are used in projects today. It’s actually common nowadays for a project to have multiple types of code in it. Often there’ll be a front-end (often JavaScript + HTML + CSS) with some sort of back-end. The point I wanted to convey in the original presentation was that software engineers often don’t just need to know one language. I then would riff lightly on which of the languages they could see in my slide I’d worked with in some form or fashion. (In the snippet you can see of the image: Perl, Scala, Go, JavaScript, Ruby, and Lua. I did just enough CoffeeScript to not want to do it anymore…)

Well, now it’s 2021. The slide information needs to be updated, and Mr. van der Ende has not updated his image, but he was kind enough to make available his source code and a handy README file which walks (loosely) through how to get the data.

Challenges solved so far:

  • getting access to BigQuery
  • finding new sources of the data, since the dataset van der Ende references doesn’t seem to exist anymore
  • convincing BigQuery that I have permission to run queries
  • updating the query to match the new data source, including figuring out how to flatten arrays – really not in his original flow
  • downloading MySQL to my developer machine and setting up a database and username/password combo
  • updating van der Ende’s code to read directly from a CSV, rather than assuming I’m using a JSON file
  • getting PHP to work on my developer workstation – this particular box has done lots of things for me lately, but PHP hasn’t been one of them
  • figuring out how to populate the languages list the code asked for, given the languages represented in the dataset I downloaded. (For the record, awk, sort, and uniq were the happy combo; see the sketch after this list.)
  • uh, figuring out a better way to ingest the CSV, since pulling in the full file at once took up too much memory for my computer
  • (more to come undoubtedly to get it working…)
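That awk/sort/uniq combo, sketched (the filename and column number are hypothetical; which field holds the language name depends on the query output):

    awk -F',' '{print $2}' languages.csv | sort | uniq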

Note: I ultimately ran into enough things with it that I left the original image. Still on my todo list to bring this to resolution…