The customer says: we need to be able to add data to the system.
The software engineer says: No problem. Add new rows to the database.
The customer says: But we’ll need new data fields – we need to be able to say that our data now has new attributes, and the GUI has to be able to handle that without coding changes.
The software engineer scratches her head for a while (a long while) and finally says: Eureka! I’ve got it! I am invincible! [Our software engineer is a fan of the GoldenEye James Bond movie . . . insert your own mental image of our heroine in the appropriate pose.] We’ll just write the code in this convoluted way – and your system will be flexible to the nth degree.
The customer says: But I need it flexible to the n+Zth degree! I want to also stretch it in this hitherto unknown way that nearly approaches the bounds of intelligence in computing. The system has to be flexible enough to allow me to add new concepts and entities to it, and relationships between entities that not even Solomon could comprehend, without coding changes.
The paying client interjects (the customer is an associate of the paying client, and isn’t actually paying for the work): Oh, and you need to do it based on this code base here – which knows about one entity and no relationships – so that we can retrofit it into this earlier system.
The software engineer says (in a professional and wise tone, with her fingers in her ears and her tongue out): Pfffffft.

[The previous blog entry is based upon a true story. The names have changed to protect the guilty-as-all-get-out parties. The software engineer regrets any opportunities missed to say ‘Pfffffft’ in real life. And waits for inspiration. And hopes it comes in the form of very good beer.]
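For the curious: the “convoluted way” our heroine reaches for is usually some flavor of the entity-attribute-value (EAV) pattern – instead of one column per field, you store rows of (entity, attribute, value), so new attributes arrive as data rather than schema changes. A minimal sketch in Python (the names and values are mine, purely illustrative, not from any real system):

```python
# Minimal entity-attribute-value (EAV) store: new attributes -- and even
# new entity types -- are just new rows. No schema or code changes needed.
class EavStore:
    def __init__(self):
        # each fact is one (entity, attribute, value) triple
        self.facts = []

    def set(self, entity, attribute, value):
        self.facts.append((entity, attribute, value))

    def get(self, entity, attribute):
        # last write wins
        for ent, attr, val in reversed(self.facts):
            if ent == entity and attr == attribute:
                return val
        return None

    def attributes(self, entity):
        # every attribute this entity has ever been given
        return sorted({attr for ent, attr, _ in self.facts if ent == entity})


store = EavStore()
store.set("widget-1", "color", "red")
# the customer's brand-new field, added with zero schema work:
store.set("widget-1", "flux-capacitance", "1.21GW")
```

The flexibility is real, and so is the cost: the database can no longer type-check, constrain, or efficiently index your “columns,” which is roughly where the Pfffffft comes in.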

PDA – in this case, personal digital assistant, rather than public display of affection. Though, if I win my latest auction at UBid for a Handspring Treo 90, I may have to kiss the delivery man when it arrives.

I had an early model Palm Pilot several years ago, and loved it. Being the frugal person that I am, I had bought it off a friend of mine who had to get the newest, greatest version of the Palm. Thus, I got a Palm at a great price, and he got a subsidy toward his latest geek fix. Worked out well all around. But then I dropped my Palm and busted its screen. And rather than buy a new Palm, I decided to stick to a paper organizer format.

For better than a year, though, I’ve had this yearning to return to electronic. For 2001’s Christmas, I convinced myself that until I knew I wouldn’t drop a PDA, I’d better stick to a DayTimer system, so my hubby got me an organizer set for Christmas. And it worked well. Still works well. But I still keep thinking I could do so much better with an electronic system. I could keep my work schedule synchronized with my personal schedule better (since my work schedule’s kept in Outlook); I could better keep track of the countless email addresses and other contact info I may need at my disparate locations (two offices for work, plus home); I could more easily. . . The list keeps getting bigger.

So, I’ve finally decided that the Treo’s the one for me. It’ll synchronize with the applications that I want it to, it has a reasonable amount of memory and a color screen, it has its own built-in keyboard, and it has the ability to expand to do other schtuff as necessary through an expansion slot. Now I’m seeking it at a more reasonable price than its nearly $300 retail. (Remember, I dropped the last one. . .) Desperately crossing my fingers that the one I’m bidding on at UBid winds up with me. Tried this earlier this week and got outbid at the last minute. The auction’s over in 30 minutes, so here’s hoping that Cora continues to snooze for those thirty minutes so that I can have a shot at getting that Treo.

Ran across a reference to James Gosling and what he’s up to on Craig Larman’s site. . . (I have this habit of seeking out famous software folks’ websites – typically they’ve got lots of interesting articles and resources on ’em, and sometimes sneak peeks at their books). Turns out Mr. Java is building a new development system. From Sun’s site,
“Ace technology enables developers to simplify and automate the development of enterprise Java applications, create applications that are easy to migrate from one architecture to another, and optimize performance and scalability.” The site claims to have replicated a system (the Java Pet Store) that originally required ~14,000 lines of code and six months of development time, in 224 lines of hand-written code and one week.

I’m interested, but not biting yet. Memories of bad experiences with another code-generation tool called Versata come to mind. I’ve never yet found any tool that’s as inventive at both creating business problems and solving them as the human mind. Code generators have to play by rules; humans don’t. But if I can convince our CIO to give someone (me, maybe?) some free time to build a real app with it, maybe I’ll be pleasantly surprised.

Geek alert: the following will not be useful to most folks, but serves as a handy area for me to dump some things that I’m going to want later for various projects that are stewing in my brain. If you’re interested in seeing just how much of a geek I am, this’ll give you a peek.

* The Design Patterns Java Companion: had a copy of this years ago when I worked through a Design Patterns study group. Had since lost or lent my printed copy. . . glad to find it again.
* An RSS FAQ: wrestling with a way to let a bunch of us locally publish, and then release to a central repository or reference point. If I get it working, I’ll give a better description.
* Bitter Java – a non-printable version of the book. Useful if you’re deciding whether to buy it.

Cynicism among engineers isn’t a character flaw. It is key to their strength. And for the Dilbert view. . .

Cynicism reigns! “I will worship no more false [optimists].” – misquote of The Tempest’s Caliban [actually, the misquote started as a pure misquote in a 12th grade English paper – a paper explaining/referencing a quote that apparently doesn’t exist in The Tempest, by Caliban or any other character. Hey, it was an in-class writing where we couldn’t reference the play – not an intentional whole-cloth misrepresentation of the play]

[My apologies for the mild incoherence of this entry. . . Too much caffeine already.]

For machines that supposedly have no true intelligence, computers are the most infernally arrogant personalities that I have ever met! I suspect that’s why I like working with/on them: by crafting a well-designed, well-implemented (both are important!) program, I solve both the problem at hand and clamp down on any future insurrections (via bugs introduced later through program revisions). Lately, though, I’ve felt like I’ve been losing the battle. I’ve been working on a GIS system on one project, and on a project involving servlets and web services on another. I’ve run into so many interesting ways to blunder that I created a HardKnocks document into which I’ve been pouring my notes for the next hapless adventurer in Java web services. The GIS system is deployed on a Solaris machine – completely different from administering a Windows machine. Very interesting. . . and I’m getting a crash refresher course in all the Unix command stuff I briefly learned in college, to figure out such things as why an 18MB download doesn’t fit on a drive that has 40MB+ free space.
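One plausible culprit for the 18MB-that-won’t-fit mystery: the “free space” a tool reports often isn’t the space available to you. On most Unix filesystems a percentage of blocks is reserved for root, and you can also run out of inodes with plenty of blocks left. A quick probe in Python (a modern illustration of the idea, not what I actually ran on that Solaris box):

```python
import os

# statvfs exposes the difference between blocks that are free
# and blocks actually available to a non-root user.
st = os.statvfs("/")

free_mb = st.f_bfree * st.f_frsize / (1024 * 1024)    # free, incl. root's reserve
avail_mb = st.f_bavail * st.f_frsize / (1024 * 1024)  # free to ordinary users

print(f"free: {free_mb:.1f} MB, available to you: {avail_mb:.1f} MB")
print(f"free inodes: {st.f_ffree}")  # zero here also means "disk full"
```

On the command line, `df -k` vs. `df -i` tells a similar story: blocks on one hand, inodes on the other.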

I used to have (still have?) a cartoon somewhere in which a programmer is standing in front of a large (think full room-sized) mainframe. The gentleman is holding a bell, a candle, and a knife, and appears to be sacrificing a woman to the infernal machine. I found the cartoon amusing ten years ago as a newbie developer, and find it even more amusing now that I’m quite a bit more seasoned. We don’t sacrifice women, but we do sacrifice time and stress; we don’t give the machine offerings of food, but we offer it more RAM and disk space; we don’t read tomes of prophecy, but we do pore over volumes of design patterns, language references, and coding techniques.

Think I’ll try to dig up that cartoon and hang it in my office. . . maybe with a candle next to it.

My husband recently installed Opera on our computer. Haven’t used it much: I guess I’m used to my Internet Explorer, so haven’t wandered afield. But, since he had an Opera window up, I hit our website to see if Jas had posted anything fresh. (Nope, he hadn’t.) Then I hit my side of the site just to confirm that it looks good in Opera. Horror of horrors, it doesn’t lay out properly at all in Opera. Note that my layout is based on stylesheets. I’m aware that IE doesn’t always conform to the spec, and so things that work fine in IE don’t always work fine elsewhere. But this is the first time I’d been hit with my stuff not working. So, for anyone looking at this in Opera, my apologies. . . It will be fixed.

Early on, I knew I was going to program computers when I grew up. For our sixth grade graduation, our class sang a song listing the careers we’d have when we grew up, and my poor music teacher had to stuff “computer programmer” into the lyrics. I spent time reading books like Isaac Asimov’s I, Robot and a book series about a group called the AI Gang. In these books, robots interacted with humans, and had some manner of intelligence. In the more interesting of the Asimov stories, the robots had some understanding of their own existence, and of how important it was to be aware that they existed. I was certain that by the time I grew up, I’d be working on thinking computers, either building the first ones, or dramatically expanding what a robot could do or understand.

Eventually I realized that the field of artificial intelligence is in a very rudimentary state, at least as contrasted with the idea of self-awareness. (Self-awareness and what that means could be a very long blog entry in and of itself. . . neat topic to grapple with). Working in the field of AI would mean long hours of research with very little reward, as measured against the end goal. So, I bagged the idea of AI work, and instead enjoyed the fruits of systems development and software construction work.

My views on AI have shifted – I no longer believe that truly intelligent computers will ever exist. God blessed man with a gift, and I don’t believe it will ever be in man’s power to create a computer with that same capability (note that man was thrown out of Eden for eating from the tree of knowledge). But I do think that in pursuing the boundaries of what we can do, we better appreciate and wonder at the things we will never be able to do.

In that vein, two projects have caught my attention lately. One’s called A.L.I.C.E. It’s an open-source markup language and bot engine that allows folks to create a free natural language artificial intelligence chat robot. In other words, a computer you can talk with and that would respond appropriately. (Note that I don’t say intelligently, as it has no true understanding, per se, of the conversation.) Wow! Theoretically, in addition to giving appropriate conversational responses, you could tie in system triggers that might even be parameterized with information given from the conversation. So you could tell the computer something, using conversational language, and have it react and cause other things to occur. Have it mine the conversations and their results, and now it has more information with which to inform future conversations. The computer wouldn’t be self-aware, but its future reactions could learn from previous ones.
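A.L.I.C.E.’s markup (AIML, which is XML) pairs patterns with templates: the engine matches user input against the patterns and emits the winning template, with wildcards capturing pieces of what the user said. As a toy illustration of that pattern/template idea (my own sketch, nothing like the real AIML matcher):

```python
import re

# Toy AIML-style bot: each "category" is a (pattern, template) pair,
# with * as a wildcard whose match can be echoed back via {star}.
categories = [
    ("HELLO", "Hi there!"),
    ("MY NAME IS *", "Nice to meet you, {star}."),
    ("*", "Tell me more."),  # catch-all, like AIML's default category
]

def respond(user_input):
    text = user_input.upper().strip()  # AIML matching is case-insensitive
    for pattern, template in categories:
        # turn the pattern into a regex, with * capturing one or more words
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, text)
        if match:
            star = match.group(1).title() if match.groups() else ""
            return template.format(star=star)
    return "..."
```

The real engine adds recursion, context, and thousands of hand-written categories; the triggers I mused about above would hang off the template side.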

The second project is run out of the National Library of Medicine, which runs all sorts of neat projects. The specific project is called WebMIRS. It’s basically a tool for accessing certain sets of medical survey data. Pretty basic data access application, but it has some exciting future goals. Essentially, the folks at NLM are interested in having the application recognize various medically interesting things, such as fused vertebrae or vertebrae with bone spurs, by evaluating the image data in X-rays. So, I could type in a query like, “return all data where the spine has some contusion in vertebra 4” and the computer would translate that query request into some evaluation of the image data. The human brain makes some sort of qualitative judgement, comparing what it knows of what contusions look like on vertebrae with the picture it’s examining now. But how do we tell a computer to make such a recognition?? We’d be teaching a computer to translate the bits and bytes that make up the image into some picture of what a particular vertebra looks like, and then telling it to compare that to what contusioned vertebrae generally look like – to have some understanding of the contents and context of a picture. Wow!

Exciting stuff! And all too much for my tired brain to handle right now. . . My own system’s going to retreat to bed and run whatever screensaver/dream that’s currently queued up for me.