My husband recently installed Opera on our computer. I haven’t used it much: I guess I’m used to my Internet Explorer, so I haven’t wandered afield. But since he had an Opera window up, I hit our website to see if Jas had posted anything fresh. (Nope, he hadn’t.) Then I hit my side of the site just to confirm that it looks good in Opera. Horror of horrors, it doesn’t lay out properly at all in Opera. Note that my layout is based on stylesheets. I’m aware that IE doesn’t always conform to the spec, so things that work fine in IE don’t always work fine elsewhere. But this is the first time I’d been hit with my own stuff not working. So, for anyone looking at this in Opera, my apologies. . . It will be fixed.

Early on, I knew I was going to program computers when I grew up. For our sixth grade graduation, our class sang a song listing the careers we’d have when we grew up, and my poor music teacher had to stuff “computer programmer” into the lyrics. I spent time reading books like Isaac Asimov’s I, Robot and a book series about a group called the AI Gang. In these books, robots interacted with humans, and had some manner of intelligence. In the more interesting of the Asimov stories, the robots had some understanding of their own existence, and of how important it was to be aware that they existed. I was certain that by the time I grew up, I’d be working on thinking computers, either building the first ones, or dramatically expanding what a robot could do or understand.

Eventually I realized that the field of artificial intelligence is in a very rudimentary state, at least as contrasted with the idea of self-awareness. (Self-awareness and what that means could be a very long blog entry in and of itself. . . neat topic to grapple with). Working in the field of AI would mean long hours of research with very little reward, as measured against the end goal. So, I bagged the idea of AI work, and instead enjoyed the fruits of systems development and software construction work.

My views on AI have shifted; I no longer believe that truly intelligent computers will ever exist. God blessed man with a gift, and I don’t believe it will ever be in man’s power to create a computer with that same capability (note that man was thrown out of Eden for eating from the tree of knowledge). But I do think that in pursuing the boundaries of what we can do, we better appreciate and wonder at the things we will never be able to do.

In that vein, two projects have caught my attention lately. One’s called A.L.I.C.E. It’s an open-source markup language and bot engine that lets folks create a free natural-language artificial intelligence chat robot. In other words, a computer you can talk with and that responds appropriately. (Note that I don’t say intelligently, as it has no true understanding, per se, of the conversation.) Wow! Theoretically, in addition to giving appropriate conversational responses, you could tie in system triggers that might even be parameterized with information given in the conversation. So you could tell the computer something, using conversational language, and have it react and cause other things to occur. Have it mine the conversations and their results, and now it has more information with which to inform future conversations. The computer wouldn’t be self-aware, but its future reactions could be informed by previous ones.

The second project is run out of the National Library of Medicine, which runs all sorts of neat projects. The specific project is called WebMIRS. It’s basically a tool for accessing certain sets of medical survey data. Pretty basic data access application, but it has some exciting future goals. Essentially, the folks at NLM are interested in having the application recognize various medically interesting things, such as fused vertebrae or vertebrae with bone spurs, by evaluating the image data in X-rays. So I could type in a query like, “return all data where the spine has some contusion in vertebra 4,” and the computer would translate that query into some evaluation of the image data. The human brain makes some sort of qualitative judgement, comparing what it knows of how contusions look on vertebrae with the picture it’s examining now. But how do we tell a computer to make such a recognition? We’d be teaching a computer to translate the bits and bytes that make up the image into some picture of what a particular vertebra looks like, and then telling it to compare that to what contusioned vertebrae generally look like – to have some understanding of the contents and context of a picture. Wow!
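One common way to attack that kind of problem is to reduce each image region to a handful of numeric features and compare those against labeled examples. Here’s a toy sketch of that idea, with entirely made-up feature values and labels; the real WebMIRS work would involve far more sophisticated image analysis than this.

```python
import math

# Hypothetical features per vertebra image:
# (edge irregularity, density contrast) -- invented for illustration.
labeled_examples = [
    ((0.9, 0.8), "contusion"),
    ((0.8, 0.7), "contusion"),
    ((0.1, 0.2), "normal"),
    ((0.2, 0.1), "normal"),
]

def classify(features):
    """Nearest neighbor: label a new image by its closest labeled example."""
    _, label = min(labeled_examples,
                   key=lambda ex: math.dist(ex[0], features))
    return label

# A query like "return all vertebrae with a contusion" then becomes:
# extract features from each image, keep the ones labeled as contusion.
scans = {"vertebra_3": (0.15, 0.2), "vertebra_4": (0.85, 0.75)}
hits = [name for name, f in scans.items() if classify(f) == "contusion"]
print(hits)  # -> ['vertebra_4']
```

The hard part, of course, is exactly what this sketch hand-waves away: turning raw X-ray pixels into features that reliably capture what a contusion actually looks like.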

Exciting stuff! And all too much for my tired brain to handle right now. . . My own system’s going to retreat to bed and run whatever screensaver/dream that’s currently queued up for me.

I’m a software engineer, and a darn good one. I love working with customers, designing systems to meet their needs, and then building systems that exceed what they expected. (“You mean you’ve already thought ahead to what I might need here, and have made it easier/cheaper to add this functionality?” “You mean I shouldn’t expect the beta period to be bug-ridden?” “You’re actually on time/on budget with my project?”)

Lately those talents have had to lie dormant. We had our first child a few months ago, and since I was nursing, it made sense for me to be the one to stay home with our child, rather than my husband staying home. There was pretty much no other reason – we do basically the same thing, could live on either one of our salaries, neither one of us is by nature a child abuser. . .
So, my husband is at work. I’m at home. Now I’m looking for part-time computer software work. You’d think that that’s a unicorn or some other mythic creature. The general reaction has been the same: there’s really no part-time work to be had in software development.

The impact on me has been near-total discouragement. I like what I do and don’t enjoy the idea of giving it up. But think of the impact of this phenomenon on women in computing. . . women in software have no real choice but to put their children into day care. Those of us who want to spend at least some of our children’s earliest, most formative years with them need to step away from software development. And need I mention that the four or five years it’ll take my child to get to school is a lifetime in software engineering? Folks with four or five years’ experience in an area are considered senior, because that’s usually as long as that particular technology has been around. The technologies I work with will either not exist or will have evolved into something similar in name only to what’s there now (think VB.NET vs. VB).

Sure, I can keep up in the trade journals, buy the latest geek books, try things out on our home computer, even pick up a degree or a certification or two. But as any employer (or employee, if they’re truthful) will tell you, that just doesn’t measure up to real-world experience. If I’m able to get back in (and that’s a big if), I’ll be at the bottom rung of the expertise ladder again, worth very little to a potential employer, as compared to where I’d be if I’d spent those five years in the field, even on a part-time basis. It’s as if the bottom just dropped out on my career, just about negating everything I’ve done/learned/earned to date. I’ve spoken with other women who’ve gone through this same thing, and the basic conclusion has been that it’s too hard to get back in, and just not worth it to start back at the bottom again, proving yourself all over again.

Just think of all those women who’ve gone down that same greased chute. And then wonder why there are so few women in IT. Maybe we got in and then got dumped out. Or maybe we’re advising our daughters/friends to choose careers more conducive to concentrating on your child for a while.

Folks keep telling me that these years go by so fast, that I should savor my child’s development while I can and focus again on my career once my child goes to school. Wonderful advice for some fields – but seemingly a death knell for a software development career.