My high school aged son came down at around midnight the other night to my cubby office in the basement, concerned about what had me up so late. I had my laptop open and he could see various bits of code up on my monitors. “Mom, what are you working on? Still stuff for work?”

I had a JSFiddle open, various tabs up in my web browser, and a Kali Linux VM running. How to explain to my son the addiction that had me spiraling into the wee hours?

“No, closed out my work hours ago. This is coding for fun on a CTF.”

“Getting anywhere?”

“Welp, hashcat is churning, but so far it doesn’t look good, and this other stuff is me futzing on code to solve a different challenge. So, maybe?”

“Going to bed anytime soon?”

“Probably oughta, but…”

“Alright, mom. Love you. See you in the morning.”

He knows me too well. A couple of hours later, I decided hashcat was not going to make it and I oughta find another approach. I left myself notes on the jsfiddle code as to next steps to try. Went to bed. Still up in time to wake him up for school. Life of a geek mom.
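For the curious, the hashcat run was along these lines – a straight wordlist attack. The hash mode, hash file, and wordlist below are placeholders, not the actual challenge’s:

```shell
# Straight wordlist attack (-a 0); -m selects the hash type (0 = MD5 here,
# purely as an example). File names and wordlist are placeholders.
hashcat -m 0 -a 0 challenge-hashes.txt /usr/share/wordlists/rockyou.txt

# When the plain wordlist stalls (as mine did), a rules file mutates each
# candidate before hashing - worth a try before abandoning the approach:
hashcat -m 0 -a 0 challenge-hashes.txt /usr/share/wordlists/rockyou.txt \
  -r /usr/share/hashcat/rules/best64.rule
```

If the rules pass doesn’t land either, it’s usually time to question whether brute force is the intended solve path at all – which is roughly where I ended up.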

This past week was a crazy one. I came back from being out of the office with COVID. Felt OK, just couldn’t go back in until either a negative test or enough days had passed. Monday was my first day back.

We’ve been working on a major feature for the uber project. Something that hits milestones that get reported to big-wigs. Tuesday was supposed to be the first demonstration of it – didn’t have to work fully, but the point was to get the various teams to integrate their stuff. Tech lead had been saying we were looking strong. I walk in on Monday and now the tech lead is out for COVID reasons, won’t be in all week, and has named me as his backup. OK, the demo’s looking strong, right?

In the full thread, I rant about what I walked into. Everything was broken. Meaning, some things weren’t even lined up to work: weren’t being built in the right environment, weren’t configured for deployment to that environment, etc. Worse, the things we did have built and deployed were suffering from two hairy problems not seen in our dev environment.

First, DevOps had rotated the keystores and even though we had the right location and password, our code was complaining that it couldn’t decrypt the key. Turns out they’d added an extra layer of passwords that our code wasn’t set up to handle. Scrambled to swap to an alternate form of keys that didn’t have as many layers, which meant I had to redeploy our base infrastructure. I hadn’t deployed it the first time – the tech lead had – so I was wading through deployment scripts and properties files trying to set things up correctly.

OK, averted that problem. Had it in hand well before the demo on Tuesday. The one I didn’t have as well in hand: the infrastructure relies on Docker containers running in Kubernetes. We don’t launch the containers – the infrastructure does. In our dev environment, everything worked well. In our demo environment – crash and burn. The container failed to start and complained about a permissions error. The tech lead had mentioned the problem the previous week and said DevOps had fixed it. What he hadn’t realized is that they’d fixed it for a particular running container, not for the infrastructure overall. Whenever a new container got launched (because we deployed a new thing or changed the settings of a thing), we’d experience the same problem. I ended up applying DevOps’ same workaround for each container that was ready for the demo, so we’d at least have something to show.

You can read the rest of the thread for the rest of my ranting from Tuesday. I was steamed. But today’s Friday, and by Friday, I have conquered. I found a better workaround for the not-running-containers bit, one that doesn’t require us to hand-edit k8s YAML descriptors. I got all of the things we had working in dev built and deployed in the demo environment, tuned settings to hit correct endpoints, made sure everything was running well, coordinated with other teams on what Kafka topics to use, and wrote bash scripts that make REST API calls to set up test data and trigger calls that help us show our stuff in action. Oh, and did that while coordinating with other members of the team on pressing support concerns, as well as writing new code. (That new code isn’t done yet, but…)
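Those bash scripts were thin wrappers over curl; a minimal sketch of the pattern (the base URL, endpoint paths, and JSON fields here are invented, not our actual API):

```shell
#!/usr/bin/env bash
# Demo-prep sketch: seed test data over REST, then trigger processing so
# there's something to show. All URLs and payload fields are invented.
set -euo pipefail

BASE_URL="${BASE_URL:-http://demo-env.example.com:8080}"

# Seed a couple of test records
for name in alpha bravo; do
  curl -sf -X POST "$BASE_URL/api/items" \
    -H 'Content-Type: application/json' \
    -d "{\"name\": \"$name\", \"status\": \"ready\"}"
done

# Kick off the processing the demo walks through
curl -sf -X POST "$BASE_URL/api/process" \
  -H 'Content-Type: application/json' -d '{}'
```

Nothing fancy, but having the whole demo setup reproducible from one script beats clicking through it live.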

Successful week! The tech lead is coming back Monday, just in time for the rescheduled demo. Given that next week’s my last week on that particular project, a great way to go out – saving their bacon in style!

I’ll be part of a panel discussion this evening for a class on “Women, Gender, and Information Technology” through UMBC. The point of the panel is to discuss career advice and women’s experiences in the tech workspace, and the questions and answers below give a taste of that. Of course, all of the answers reflect my personal experiences and opinions, shaped through some 25+ years as a practicing software developer, senior technical leader, and often team lead or people manager. I’m looking forward to hearing what the other panelists offer, as well as what questions the students may ask us directly.

If you were teaching a course on women, gender, and IT or engineering what topics would you be sure to cover?

  • Historic goof-ups: they’re memorable, occasionally funny, and help folks realize that technical folks are not all-knowing, no matter what we otherwise think.
  • Ways to collaborate in teams: code reviews, design sessions, sprint planning. Knowing how those work helps folks be comfortable speaking up, confident that they can provide value to the team either directly or by gaining knowledge that helps them contribute more strongly.
  • How to participate well in retrospectives, so that the team gains value and you gain visibility. Think through options like small incremental changes or big bang experiments
  • How you can keep learning, both within and outside of a project

Were you encouraged to pursue STEM as a child? What were your STEM elementary and high school experiences like?

I knew as a 6th grader that I wanted to program computers, so yes. We had a computer at home and I wrote programs using Turtle. My parents also paid for me to go to Gifted and Talented camps in which I chose computer programming tracks. Academic success was highly valued, and if my avenue of doing that was computer programming, they were good with that. I took AP Computer Science in high school, though opted to not sit for the exam, as I didn’t believe our program was very strong and wanted to make sure I got the full thrust of material while in undergrad.

What is something that you wish you had known as an undergraduate student?

Interestingly, I don’t think I knew loans were an option. Going into college, I understood that I either earned a scholarship or joined the military, as my parents were upfront that they weren’t paying for our college educations. Thus, the _only_ schools I looked at were in-state schools where I thought I might have good odds of earning a scholarship or being able to earn enough money through a job to pay them off. As it worked out, UMBC was an outstanding choice. But I basically defaulted into it.

I’m also very glad that I took an internship as early as I could. In my sophomore year, I worked through the Shriver Center to earn a job with a startup. In my junior and senior years, I then worked for a different startup that was housed in the on-campus business incubator. Working with those startups gave me a chance to do work that made an impact for those companies and gave me the confidence that I could get paid to deliver workable software. That gave me a context through which to approach my classes, as well: yes, I could get paid without writing an operating system from scratch. (Still needed to pass the class to get through the degree, but it wasn’t going to kill my career possibilities if that wasn’t my passion.) I hear of students who don’t look for an internship until their senior year and think that they’re doing themselves a disservice.

How did you decide if industry or graduate school was the right choice for you following undergrad?

Well, that idea of not taking loans was still front and center and I’d proved through internships that I could start to make my way in the world. I opted to start my career and then assess whether I needed a graduate degree to move forward.

When I did start to look at graduate degrees, I knew folks who were working through them at work. I was doing many of the things they were doing in class as part of my work. So I opted to steer clear. I did work towards an MBA at one point, but put that aside because it wasn’t fitting well with work + having young kids. Pragmatically, I also wasn’t going to get paid more as an engineer with an MBA, at least in the line of work I’ve been in.

I finally did do a masters, just within the last few years. I had earned a position at work for which I thought I needed a new set of technical skills. I thought a focused masters would be the best approach for learning them, so went back to school in my mid-forties to get my masters in cybersecurity. I earned it this past December, though before completing it I changed companies and roles, so I no longer feel I need it professionally so much as I needed, personally, to finish it for pride’s sake.

What is the best piece of professional advice that you have received?

Lots of folks will tell you to have 3-6 months of savings stashed for an emergency. Someone once told me to save it as my “go to hell” fund. By that, they meant, if I no longer thought I was working in a healthy environment, I didn’t have to stay. I could walk out the door that day and know I had a cushion to bounce off of until I could pick up a new gig. I’ve saved the money, but never actually used it that way. There’s a mental power, though, in knowing that I could if I need to. That I can say what needs to be said or do what needs to be done, without worrying that I’ve caused my family to end up on the street. That idea of knowing that I could walk out if I needed to, that I get to make the choice to stay rather than just be stuck: that’s a powerful advantage. That power to leave also lets me be more confident in my decisions to join: if it doesn’t work, I’ll find the next thing. I keep my skills sharp, my resume up to date, and keep in contact with those I’d be interested in working with again. And if I don’t immediately find the next thing, well, again, I won’t starve.

What is one characteristic that you believe every leader should possess and why?

A sense of the skills and knowledge of her team, so that she can help them grow as individuals and as a team, as well as make promises to stakeholders re: delivery. That’s whether she’s the explicit team lead or aiding as a contributor.

This is particularly important on technical teams: I’ve too often seen team leads promise out that something can be done or maintained by their team based on what a single developer can deliver, and assuming that then multiplies out by the number of developers on the team. Lots of times that single developer isn’t able to bring the rest of the team up to their level of understanding. Sometimes that single talented developer leaves and the team has to try to maintain what they built. I had a project once where the team lead (also the most talented developer) unexpectedly passed away and the team had to try to rebuild their knowledge and meet their delivery promises. ‘Twas a most unfortunate summer all the way around.

First geek post, OK first post in a while… Been a busy year already!

The requirement: build out a notification service that can support 1M events coming through an hour, each of which _could_ trigger a notification or notifications based on a ruleset of 100K rules. No small task. After some digging around, I came up with an implementation that uses Drools to optimize the checking of the rules to meet the needed event pace. It builds out a rules tree which can basically decide that it only needs to parse 2K of those rules based on a particular event, rather than needing to iterate over the full 100K. Very necessary optimization.

As it turns out, 100K is more than I could get Drools to bear in a single session. Never fear, though: we have multiple rule “types”. If I can send the change event in parallel through N rule processors, each of which handles one or more rule types, well, I’ve probably divided the 100K rules up reasonably and can still meet the performance requirements. Put my rule processor on the end of a Kafka topic that gets pulled on by all of the rule processors: I’ve got parallelization that lets me meet my performance requirement.
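One way to wire that fan-out, assuming each rule processor needs to see every change event: give each processor its own Kafka consumer group, so the topic is effectively broadcast to all of them. Sketched from the command line (topic and group names are invented for illustration):

```shell
# Each rule processor subscribes with its own consumer group, so every
# processor receives every event on the topic (broadcast-style fan-out).
# Processors sharing a single group would instead split the partitions
# between them - useful for scaling one rule type, not for fanning out.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic change-events --group rule-processor-type-a

kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic change-events --group rule-processor-type-b
```

The nice part of leaning on consumer groups is that adding a new rule "type" is just a new group name – no changes needed on the producing side.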

So, today I wire key elements of our solution up in a development environment. The rule set is intended to be cached in a Redis instance, and my task today was to figure out how to deploy the Redis instance within Kubernetes with an appropriate set of configurations for its workload. I’ve never used Redis before in any significant way, so this was an interesting problem: should it be a standalone single node that handles reads and writes? Should it instead use replication so writes go through one node and reads go through another set? How much CPU and memory? Let’s assume I’ll get those wrong or they’ll need to vary in environments: how do I turn on metrics and visibility into resource usage so I know when we need to change things?

Got all of that worked out for at least a first cut. I go to deploy using docker-compose as a stand-in for kubernetes. Awesome – it stands up and I figure out how to adjust our code’s database connection to use the replication model I’ve chosen (e.g., read from one node, write to another). Great – now I wire up my local environment to the thing it’s loading the data from to see how this works with whatever they’ve got loaded. Hey, it’s development – I’m expecting a paltry data set, just enough to let me show the read/write interactions work out OK.
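With the pair up under docker-compose, redis-cli makes the read/write split easy to sanity-check (the hosts and ports below are assumptions standing in for whatever the compose file actually maps):

```shell
# Confirm which node plays which role; ports are placeholders for
# whatever docker-compose mapped for the primary and the replica.
redis-cli -h localhost -p 6379 info replication   # expect role:master
redis-cli -h localhost -p 6380 info replication   # expect role:slave

# Quick split check: write to the primary, read it back from the replica.
redis-cli -h localhost -p 6379 set demo:key hello
redis-cli -h localhost -p 6380 get demo:key
```

If that last `get` comes back empty, replication lag or a miswired connection is the first thing to look at.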

Guess what? The “paltry data set” is 1M+ rows. Not only 1M+ rows, but 1M+ rows all of a single event type – no concept of sharding the events across different processors here, no siree. It’s either back to the drawing board or kill -9 whatever data generation process the upstream system is using. Suspecting the latter, that someone’s gotten overeager with the “we can build it so let’s pump it full!”. Will be interesting to see if the 1M+ number is larger tomorrow!

If none of the above makes sense to you, well, my non-techie analogy would be: your budget is 100K for a house. You’ve got 100K, or at least a path to get to it. But you’re surprised to discover that all of the houses in your reasonable travel area for work are 1M+. And your lease is about to expire. Find a solution!

After a day of supporting First Lego League as a judge (lotta fun!), I was delighted to see an early release of next week’s WiCyS cyber challenge. The title is ‘Wireshark doo dooo do doo’ and it’s only a 50 point challenge, so I was expecting a not too difficult exercise in finding things in a network traffic file.

Not too difficult is right. After checking the file properties (and importantly, looking at the comments, as well as doing a quick search in the text of the file for the flag format of picoCTF – hey, easy finds are still points!), I looked through what the file’s protocol hierarchy said it held. Mostly HTTP, with a little bit of line-based text data. Let’s start there.

Results: two packets that returned text/html or text/plain data. The line-based text data one has a syntax that looks a lot like a flag: foo{morecraziness}.

I tried using CyberChef’s Magic decryptor, without success. I even tried telling it what I expected the first bits of the text to be (“The flag is picoCTF”) – still no dice. I then tried an online rotation-cipher breaker I’ve used before: https://www.dcode.fr/rot-cipher. In this case, the first thing it returned in its long list of possibles was ROT13, with text that said THEFLAGISPICOCTF. OK, that’s my likely winner. I went back to CyberChef and used its ROT13 recipe, figuring it’d better handle upper/lower, numbers, etc. Bingo. Flag in hand, all w/in ~15 minutes.
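ROT13 is also a shell one-liner, no web tools required – `tr` handles both cases (the input below is a stand-in, not the actual flag):

```shell
# ROT13 via tr: rotate A-Z onto N-ZA-M and a-z onto n-za-m.
# Digits, braces, and underscores pass through untouched.
echo 'cvpbPGS{abg_gur_erny_synt}' | tr 'A-Za-z' 'N-ZA-Mn-za-m'
# → picoCTF{not_the_real_flag}
```

Handy to keep in the back pocket for any CTF string that looks like letter soup with flag-shaped punctuation.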

Going to have to find some more interesting puzzles for the rest of the week…

Intriguing item in my Slack feeds this morning:

What is it: Weekly challenges from the picoCTF gym in #wsc-ctf-challenge

How it works: Each week a challenge from the picoCTF gym will be shared in #wsc-ctf-challenge with cross post to #general, the following week a solutions thread will be opened for people to discuss their solutions. (Please keep spoilers in thread for people that are solving challenges later)

Challenges will be shared Jan 23rd, Jan 29th, Feb 5th with a Zoom walkthrough planned for Feb 12th.

This week’s challenge was entitled ‘Scavenger Hunt’ and led us to a very basic webpage. The hunt was on! As it looks like they’ll be releasing these on Sundays each week, I’ll see where each Sunday leads, and then write up my discoveries.

First step: inspect the HTML through developer tools. There was a comment in the primary webpage that described itself as the first _part_ of the flag. No mention of how many parts.

OK, looking further, I looked at the site’s javascript, which gave me a clue that led me to the site’s robots.txt file, which is used to tell search engine crawlers (like Google’s) what not to index. That told me I had part 3 of the flag, and mentioned that the next flag was related to the site being an Apache server.

Checked the CSS next, which showed me part 2.

This is where I was stuck for a while. I tried a number of things related to the site being an Apache server.

  • Brute force trying a few potential file paths: admin, README.txt, …
  • Tried running TRACE against the site, after reading a couple of articles [1, 2] which talked about hardening your Apache server: curl -v -X TRACE http://mercury.picoctf.net:27393
  • Tried banner grabbing via nc, since that’s one of the other things the links suggested turning off: nc {ip of box} {port}. Then immediately followed by HEAD / HTTP/1.0 (also tried 1.1)
  • Tried fuzzing to find unrecognized files: sfuzz -S {site} -p {port} -T -f /usr/share/sfuzz-db/basic.http -L picofuzz.txt -q
    • My plan was to grep the log file for ‘flag’, ‘part’, or even regexs of chars_chars_chars, since the flag structures seemed to look like that
    • I left sfuzz running while I was at church, but it didn’t even retrieve the javascript files or robots.txt
  • Tried directory busting via gobuster dir -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt -u http://mercury.picoctf.net:27393
    • No results

Finally I opted to look around and find some other CTF writeups that referenced Apache, in case they led to more angles. I found this one, which mentioned using dirb to identify entry points. Running dirb http://mercury.picoctf.net:27393 turned up an .htaccess file, which had a flag in it and another clue.

The next clue said: “I love making websites on my Mac, I can Store a lot of information there.” I started looking for ways to find common Mac files. I used to have a Mac, so was used to seeing an extra file or two around, but couldn’t remember what they were called. I started looking for sample .gitignore files for Mac developers, and found this posting which mentioned that “This will ignore any files named .DS_Store, which is a common file on macOS.” Aha – now I see why Store was capitalized. Sure enough, browsing for that file gave me another piece of the flag and a message that said I’d completed the scavenger hunt.
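Grabbing that file was a direct request, the same pattern as the robots.txt and .htaccess finds earlier (the port is this challenge instance’s and changes on a re-spawn):

```shell
# Request the stray macOS metadata file from the server root;
# the same pattern surfaced robots.txt and .htaccess earlier.
curl -s http://mercury.picoctf.net:27393/.DS_Store
```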

One other thing I did try which I’ll want to use again in the future: I used the tool “nikto” which exists to scan web servers for known vulnerabilities. When pointed at the hostname and port, it gave me some information about the system, including the existence of the .htaccess file. It would apparently have also pulled back the banner information from the web server.
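The nikto invocation is pleasantly minimal; pointed at this challenge it looked roughly like:

```shell
# Scan the web server for known issues; nikto reports the server banner,
# interesting files (like that .htaccess), and common misconfigurations.
nikto -h mercury.picoctf.net -p 27393
```

A good reminder that a single scanner pass can replace a fair amount of the manual poking I did above.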

My last set of “next goals” were:
* connect it in with Git, GitHub Actions, and GitHub Pages
* Pull in a collection of markdown items (ala sermon descriptions, correlated with YouTube links)
* Netlify CMS to edit markdown items, commit them into the repository, and then see the flow-through of GitHub actions -> gh-pages branch

All of the above I accomplished a few days ago, though nothing looks visually appealing. I’ve also added a navigation bar, set up navigation between the sermon descriptions, and demonstrated that our tithe button will work so long as we’re still hosted via harundalepc.org, which we will be.

Next step: a Contact Us form. We have a very simple behavior today: there’s a form with 4 fields on it, and the backend system sends an email to the church office with the information. For a static site, though, there’s no “backend system”.

This is a known problem for static sites, and there are several options out there offering to solve it. Because I think our ‘Contact Us’ form gets used very minimally, I’m opting not to pay a monthly fee to any such service, and am instead working to get an AWS Lambda function going. So far I’ve worked my way from sending a test email from AWS SES (last night) to using an AWS Lambda function to send email given a presumed payload. Note that these things were all new to me, and I’ve gotten the barest-bones versions of them working. Now to wire up an “API Gateway” such that I’ve got a web endpoint I can trigger from our static site to then hit the Lambda function.
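Once the API Gateway endpoint exists, the static site just needs one POST; a sketch of exercising it from the command line (the endpoint URL and field names are placeholders for whatever the gateway actually assigns, not our real setup):

```shell
# Exercise the (hypothetical) API Gateway endpoint fronting the
# Lambda -> SES email flow. URL and JSON field names are placeholders.
curl -s -X POST \
  'https://abc123.execute-api.us-east-1.amazonaws.com/prod/contact' \
  -H 'Content-Type: application/json' \
  -d '{"name": "Test Person", "email": "test@example.com",
       "subject": "Hello", "message": "Checking the contact form wiring."}'
```

If that works from a terminal, the form’s JavaScript only needs to issue the same request with the four field values filled in.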

That _should_ be the major elements and challenges I needed to knock out on the site before being able to “just” focus on styling and putting it up somewhere for review. Getting there!

Nerderypublic.com is delivered with WordPress – no issues, but haven’t tried to do much with it.

Our church’s website is also delivered with WordPress. Arrgh. Not well built, quite the pain to work with. As the primary maintainer, wanting to get off of it. We have other reasons to want to get off of it, including that our hosting environment for the site + the church email has had issues. Ergo, this geek is looking for other solutions.

I’m leaning towards a GitHub page delivered solution, using a static site generator which looks at markdown files for entries. Experimenting with Nuxt.js at the moment, using GitHub Actions to push out the generated site, and hopefully integrating Netlify CMS to provide ease of use for contributors other than myself. We’ll see how the experiment turns out.

Progress tonight:

  • Set up a virtual machine with a Linux environment to work in
  • Installed npm
  • Created a “church-site” using the command npm init nuxt-app church-site, per the instructions on the Nuxt site
  • Choices of note:
    • JavaScript instead of TypeScript: I’ve had previous experience with TypeScript and thought it made things too obscured. Likely better by now, but still left a bad taste in my mouth
    • Package manager: NPM instead of Yarn: again, familiarity, though suspect Yarn would’ve worked just as well
    • UI framework: Bootstrap Vue: some familiarity with generic Bootstrap. No familiarity with the rest listed as options
    • Nuxt.js modules: eh, took ’em all: Axios promise-based HTTP client, Progressive Web App, Content via Git-based headless CMS. Maybe we won’t use ’em. Right now, I’m not worried about optimizing.
    • Linting tools: took ’em all.
    • Testing framework: have some past experience with Jest, none with the others. (Hey, I haven’t done Web app development in a bit…)
    • Rendering mode: Universal
    • Deployment target: Static (Static / Jamstack hosting)
    • Development tools: took ’em all
    • Continuous integration: GitHub Actions
    • Version control system: Git

Results:

  • a ‘Welcome to your Nuxt Application’ site available on localhost:3000 when I run npm run dev
  • npm run generate doesn’t complain, seems to have generated some content under perhaps static
  • Was able then to update some items in the components/Tutorial.vue component, as well as the site name in nuxt.config.js and see them reflected in the running website

Next goals:

  • connect it in with Git, GitHub Actions, and GitHub Pages
  • Pull in a collection of markdown items (ala sermon descriptions, correlated with YouTube links)
  • Netlify CMS to edit markdown items, see the flow through

Day 2

Got it pushed to Git, working with GitHub Actions, and auto-building to a GitHub page. It’s still the same starter system, but the auto-build has somehow opened 6 pull requests on my project to bump up various dependencies. Each of those pull requests triggered a successful auto-build.

Things I wrestled with:

  • commit-lint: sure, I’ve noticed that lots of git commits in various public repos follow a particular structure of late – something like “feature(build): some text here”. That said, was surprised when my commit wouldn’t push unless it followed that structure. I’m sure I could turn off that particular pre-commit hook, but for the moment I’m leaving it there and trying the new (to me) style.
  • prettifier: I left on all the linters for the source code originally. I was regretting that earlier when my code wouldn’t pass inspection and I couldn’t find info as to why. I finally turned off linting in the build steps. For my purposes, it may not be all that necessary: I’m not really trying to sustain a community of developers with a consistent looking code-base. For learning purposes, I’ll futz with it again in the future, but not going to let it block progress.
  • somehow my original git clone didn’t set up a remote? I had to manually add one, which confoozled me for a minute. Resolved, but… Note that this is the first time I’d set up an ssh key against GitHub, apparently… I thought I had before, and I’ve definitely done it regularly for GitLab, but maybe that was the difference in procedure: the clone via ssh. Dunno. Got past it, regardless.
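For reference, the structure commit-lint was enforcing is the Conventional Commits style; a rough shape-check of a message, mirroring what the hook does (the messages and scope names are my own examples):

```shell
# Conventional Commits shape: type(scope): subject
# e.g.  git commit -m "feat(layout): add navigation bar to site header"
msg="feat(layout): add navigation bar to site header"
echo "$msg" | grep -Eq '^[a-z]+(\([a-z-]+\))?!?: .+' \
  && echo "commit-lint would accept: $msg"
```

Common types are feat, fix, docs, chore, build, and refactor; the scope in parentheses is optional.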

On Thursday, Dec 16th, I turned in my last paper for my last project for my last class of my cybersecurity degree. On Friday, December 17th, my teammate turned in the last deliverable of the project. I’m done! We’ve gotten feedback on our deliverables already (“Exceeds Expectations” – a common refrain) and a hearty “best wishes in your future endeavors” from our professor. I’m done! I’m done! The grade hasn’t posted on my transcript yet, but UMGC is holding virtual commencement exercises today. I’m walking on cloud nine, just not on a stage. I wouldn’t have walked on the stage anyway – I just wanted the achievement, not the hassle of getting to some event somewhere to be announced to people I don’t even know.

Instead, I’ll spend my weekend working with balloons for a Clementine gig this afternoon and just generally being ecstatic that I’m done!

Bit of context: in June, I changed companies and thus changed projects. In the world I work in, it can take a bit to get new accounts and be viable as a developer, so I think by mid July I was committing code into an established baseline for a monolith service. Technical architecture: Spring Boot, providing RESTful services, interacting with JPA repositories, and a smidge of interacting via Feign clients out to another service. Code is there, can be interactively debugged, etc. Plenty of meat to dig into, plenty of tools to do it with, but a decently robust codebase for a heavily used system for our customers.

Over the summer, we changed objectives. Instead of adding new capabilities to the project, we were figuring out how to safely port it to another environment, where most of the original code for the monolith wouldn’t make it just yet. That is, rearchitect it a bit, figure out how we could stub some things out, borrow what we could of the build system, and make it work. Our code would be developed in the new environment and imported into the old, and the goal was to be able to develop new things while not breaking things in the old. Challenging, particularly since the old thing was still moving forward with or without us. Still the same technical architecture, but less code to work with (since not all of the production code made it into our new environment). And moving targets on versioning: is our version foo+1 compatible with the production foo+1 in the production environment? Did they change something we rely upon? Note that things don’t change often in the areas we’re dealing with, but, since the production code’s model is that all things are at the same version, there’s a bit of extra strat-eg-ery to work through. And, of course, we don’t have a strongly built out test dataset or deployment infrastructure in the new environment.

We’re not quite resolved as of early October. But now we’re pivoting to a new thing. Entirely new objectives, entirely new codebase, entirely new architecture. Switch to providing multiple microservices, using Reactive programming and API calls for the microservices. Reading up this morning on Reactive programming, I was relieved to see the statement: “If you’re familiar with Spring MVC and building REST APIs, you’ll enjoy Spring WebFlux. There’s just a few basic concepts that are different.” (1). I’ve long thought Matt Raible was a good geek whisperer, from I think well back in Struts and AppFuse development days. Matt apparently collaborates with Josh Long (@starbuxman), who wrote much of the code Matt included in that post. So I hop out to @starbuxman and see the following near the top of his feed:

Amused that “a few basic concepts that are different” could translate to “480 pages 😯”. Recognizing that reactive style programming’s been out for a few years now and is a mature construct, I’m not super worried. I do have some development background in asynchronous programming and event handling, after all, based on an interesting websockets-based web user interface I built out a few years ago. Still hoping that 480 pages of “Reactive Spring” is really a rehash of “everything you otherwise need to generally know about Spring” with a few extra Reactive details. Else I’ll start keeping this emoji ( ☢️ ) a little closer at hand and “reactive” might start referring to my facial expression when we get the next new shift in direction.