An Evening at Coupage in Seattle

I had a terrific evening last night at Coupage, here in Seattle. The restaurant, located in Madrona close by the Hi-Spot, blends Korean and classical French cuisine, and is the first Seattle effort of Portland chef Tom Hurley, along with chef Rachel Yang. I recommend it very highly; last night’s meal was perhaps the best food experience I’ve had in Seattle in a long time — possibly since my first revelatory evening combing the menu at Lark.

Walking up to the restaurant along 34th, I could smell grilling meat a block away. Getting closer, I discovered that Hurley had a Weber kettle out on the sidewalk and was grilling Kobe beef and some chickens as specials. His plan is to add more grill capacity, both here at Coupage and at his upcoming restaurant downtown. When he does, make reservations immediately, because this man can grill.

I dined with Marc and Bill, a couple of friends from our tasting group and both aficionados of white Burgundy. They took care of the white wines, with a “starter” Coche-Dury 1996 Meursault, followed by two Leflaive Chevalier-Montrachets: 1985 and 1988. All three wines were stunning, but for me the 1988 was absolutely a standout: dense, creamy, spicy, lush, but still possessed of a crisp minerality and good acid. Pretty darned near a perfect glass of wine. The whites were accompanied by the mâche salad, dressed with a nice truffle vinaigrette and served with grilled maitake mushrooms and marinated bamboo shoot (the latter was savory and tasty and my favorite part of the dish). We also tried the wild mushroom Bi Bim Bop, a variant on the Korean classic and very tasty. We finished off the first course with the crispy pork belly — not my favorite of the three but still excellent.

As a “mid” course to finish up the whites, we had the duck pappardelle, which I thought was excellent. At various times throughout the meal, Hurley came out and told us about the food and his philosophy for preparing it and running a kitchen; I recommend talking to him. He’s led an interesting life and has the energy and passion for food you see in a rare few.

The “main” course was a family-style platter of the night’s special — grilled Kobe beef and grilled chicken. Both were superb, especially the chicken breast and the crispy end pieces of the Kobe. I served the Henri Bonneau 1990 Chateauneuf du Pape “Marie Beurrier,” which although a “second” cuvee for Bonneau (alongside the Cuvee Celestins), was a masterpiece. Deep, sweet, yet beefy and herbal, it reminded me strongly of the best bottles of 1981 Beaucastel — “Mourvedre cotton candy” was Parker’s descriptor for the latter, and although Bonneau uses very little besides Grenache in his blends, it fits. The man makes pure Grenache taste deep, dark, and complex like Mourvedre. Naturally, my stock of these wines is tiny, given availability and price, so this isn’t a wine I’ll taste again for quite a while, but I’m amazed at the experience. Marc opened a 1970 Jaboulet La Chapelle as well, but the bottle seemed to be a bit tired — clearly La Chapelle underneath a slight soy sauce layer.

We had a selection of desserts, but what stood out for me was one of the ice creams in my sampler: sweet chili ice cream. Just the faintest hint of a sriracha-like chili, which went well with the 1989 Von Hovel auslese that Bill brought.

In all, the evening was terrific — good friends, great food, and spectacular wines. I can’t recommend Coupage highly enough.

TransmissionLab Update

Yesterday I posted TransmissionLab version 1.4, a fairly major reworking of the model class core. I was dissatisfied with the way that RepastJ models, by default, seemed to tightly couple the main model class to all of the other classes I’d written for data collection, transmission rules, and population construction. My goal with TransmissionLab is really to build a framework for constructing models to study cultural communication and transmission, not to write one giant model and bolt new stuff on.
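The decoupling I’m after can be sketched as a small module interface that the model core drives without knowing anything about the concrete modules. This is an illustrative Java sketch of the general pattern only — the interface and method names here are mine, not TransmissionLab’s actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Each pluggable piece (data collection, transmission rules, population
// construction) implements this small lifecycle interface; the model
// core never references the concrete classes directly.
interface ISimulationModule {
    void setup();          // called once before a simulation run
    void step(long tick);  // called every model tick
    void finish();         // called once after the run, for output/cleanup
}

public class ModelCore {
    private final List<ISimulationModule> modules =
        new ArrayList<ISimulationModule>();

    public void addModule(ISimulationModule m) {
        modules.add(m);
    }

    // Drive all registered modules through a run of the given length.
    public void run(long ticks) {
        for (ISimulationModule m : modules) m.setup();
        for (long t = 0; t < ticks; t++) {
            for (ISimulationModule m : modules) m.step(t);
        }
        for (ISimulationModule m : modules) m.finish();
    }
}
```

The payoff is that swapping a transmission rule or adding a new data collector means registering another implementation, not editing the main model class.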

A paper by Railsback et al., in the September 2006 issue of Simulation, is right on the money in saying that the original Objective-C version of the Swarm toolkit is a strong “framework,” as opposed to the “library” style of successor toolkits like RepastJ and MASON. Swarm definitely forced a style of organization onto your simulation models, via the concept of nested “swarms” of agents, observers, etc. I suspect this is much like Ascape, but the latter doesn’t seem to be an active development project any longer (at least judging by the website; leave a comment if this is incorrect). Repast, by contrast, provides a ton of infrastructure, but simulation models themselves seem to be fairly unstructured, judging from the various examples and models folks have posted online.

Robert C. Dunnell’s graduate theory courses online!

While I was down in Long Beach recently, Carl Lipo and I talked about digitizing a series of video tapes made in the mid-1990s of the last time that Robert C. Dunnell taught his graduate archaeological theory courses. Carl has found the time and some resources to start doing that, and the first couple of files (representing the first five or so class sessions) are now available in Windows Media format on his website. The classes are an amazing resource and learning experience. We have to apologize in advance for the sound issues in lecture #2 — the colleague (who shall remain nameless) who was auditing the class and taping the lectures for us had some… technical issues.

Carl is digitizing all of Archy 497, the first of two quarters of archaeological theory. In 497, Dunnell focused on “formal theory” — concepts, key conceptual relationships, and the classification tools necessary for all explanation in archaeology. In 498, which likely will be the next digitizing project, Dunnell focuses on “explanatory” theory and the history of archaeological theory.

For those readers unfamiliar with R.C. Dunnell, he was my former academic advisor, longtime Chairman of the Department of Anthropology at the University of Washington, key initiator and driver of Darwinian approaches to explanation in archaeology, and scourge of generations of first-year graduate students. Dr. Dunnell retired in the mid-1990s and now resides in the Southeastern United States, surrounded by Mississippian mounds, archaeological sites, and decent BBQ joints.

TransmissionLab Version 1.3 available

A small update to TransmissionLab is available, which enables proper batch-mode operation and simplifies the command-line acrobatics it previously required. This version is numbered 1.3, and is available either in source code form (from the Google Code Subversion repository) or as a binary JAR release. The latter is found under “Downloads” and includes a matched JAR file, a ZIP file with library dependencies, and an example batch-mode parameter file.

Both the batch-mode parameter file and the library dependencies differ slightly from Version 1.2, so be sure to grab both; otherwise you’ll encounter errors starting up a simulation. In particular, this release adds a dependency on the Jakarta Commons CLI library for command-line parsing, since that isn’t a strong suit of the Repast libraries.

This version also adds one statistic to the OverallStatisticsRecorder data collection module. For each simulation run, we calculate the average number of agents (measured at each model tick) whose traits appear in the “top N” list of traits. In other words, if you’re working with a “top 40” list of song-analogues, this statistic measures the number of agents whose chosen trait is a song in the top 40, as opposed to a trait that wasn’t frequent enough to make the list. This statistic is thus paired analytically with the parameter controlling the size of the “top N” list, and the combination of the two should be interesting to examine across a range of mutation rates and population sizes.
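The per-tick calculation itself is simple: count trait frequencies, keep the N most frequent, and ask what fraction of the population holds one of them. Here is a Java sketch of that idea; the class and method names are illustrative, not the actual OverallStatisticsRecorder code:

```java
import java.util.*;

public class TopNStat {
    // Fraction of agents whose current trait is among the topN most
    // frequent traits this tick. Names are illustrative only.
    public static double fractionInTopN(List<Integer> agentTraits, int topN) {
        // Count how many agents hold each trait.
        Map<Integer, Integer> counts = new HashMap<Integer, Integer>();
        for (int trait : agentTraits) {
            Integer c = counts.get(trait);
            counts.put(trait, c == null ? 1 : c + 1);
        }
        // Sort traits by descending frequency, keep the top N.
        List<Map.Entry<Integer, Integer>> sorted =
            new ArrayList<Map.Entry<Integer, Integer>>(counts.entrySet());
        Collections.sort(sorted, new Comparator<Map.Entry<Integer, Integer>>() {
            public int compare(Map.Entry<Integer, Integer> a,
                               Map.Entry<Integer, Integer> b) {
                return b.getValue() - a.getValue();
            }
        });
        Set<Integer> top = new HashSet<Integer>();
        for (int i = 0; i < Math.min(topN, sorted.size()); i++) {
            top.add(sorted.get(i).getKey());
        }
        // Count agents holding a top-N trait.
        int hits = 0;
        for (int trait : agentTraits) {
            if (top.contains(trait)) hits++;
        }
        return (double) hits / agentTraits.size();
    }
}
```

Averaging this quantity over ticks gives the per-run statistic described above.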

On a related note, LiveScience has an article on the upcoming paper by Alex Bentley, Carl Lipo, Harold Herzog, and Matthew Hahn. I recommend it as a somewhat popularized account of the main conclusions of their 2007 paper. Since much of what we’re doing with TransmissionLab at the moment goes further along the lines suggested by Bentley et al., and earlier by Fraser Neiman, Carl Lipo, and myself, it’s a good clue to the kinds of phenomena we can explore purely by assuming that choice among alternatives is statistically random or neutral.

Monica Goodling and “Pleading the Fifth” to Avoid Congressional Testimony

Over the last couple of days, we’ve heard that Monica Goodling, counsel and aide to Attorney General Gonzales, will “take the Fifth” to avoid testifying in front of Congress, unlike Kyle Sampson (who testifies tomorrow). At first blush I didn’t think much about this, because it seemed like a generic stonewalling tactic by folks who are trying to protect Gonzales.

But looking at the actual language of the Fifth Amendment, I’m wondering whether Goodling can actually refuse to testify using the Fifth Amendment in this case:

No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a Grand Jury….nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law…

The relevant language, of course, is the middle clause: “nor shall be compelled in any criminal case to be a witness against himself.”

Goodling is likely relying upon everyone, including Congress, believing that because she would be under oath, giving testimony when called would constitute “being a witness against herself.” Exactly this reading of the Fifth Amendment is what makes this a potential controversy.

But one rule of constitutional construction, advocated especially by “strict constructionists,” is that we have to pay attention to the actual sentences and language itself — we don’t get to pick and choose which words, clauses, or even sentences we pay attention to. So when the language says, “nor shall be compelled in any criminal case to be a witness against himself,” doesn’t it seem like this language applies to… criminal cases? Is a Congressional investigation a “criminal case”?

Clearly not.

Could a criminal case arise out of misconduct in a Congressional investigation?

Clearly.

Does Goodling run the risk of incriminating herself by testifying honestly in front of Congress, if she’s engaged in wrongdoing?

Clearly.

Does Goodling run the risk of being charged with perjury if she falsely testifies in front of Congress, to hide that wrongdoing?

Clearly.

Does the Fifth Amendment apply to her during either a trial for some wrongdoing, or perjury for lying about wrongdoing?

Clearly.

Does the Fifth Amendment help her get out from between the horns of this dilemma, and get away scot-free?

Heck no, and the Congressional leadership should not let that happen. If Monica Goodling needs to “make a deal” in advance, trading immunity for her testimony as happens seemingly every night on every crime drama on TV, let the deal-making begin. Because if there’s a deal, it’ll be a public deal: we’ll all know that she struck a deal to protect herself, we’ll hear her account of events, and we can move on from there.

This is exactly the same deal we’d give anybody else whose testimony we needed to continue an investigation higher up the “food chain,” and no public servant is above the law, nor above the usual practice of law enforcement.

A humbling programming experience

I’m working on a short script to post-process some simulation data from TransmissionLab, and because the scripting language I know best is Perl 5, I’ve written a short Perl program. I’ve been writing Perl since early 1994, and from about 1997 through 2005 I was fairly expert in the language, able to build and maintain fairly large object-oriented systems that were actually readable by others. I even knew a fair bit about Perl internals, could link a C library to Perl via XS, and followed the (interminable) Perl 6 development process quite closely.

But I realized today that I’ve completely lost my fluency in the language. I’m struggling to re-activate the parts of my brain that understand deeply nested hash tables, objects, and other Perl-isms. I had to look at the perl man pages today to remember bits about foreach loops and the “defined” function. It’s coming back, and the program works, but it’s been slow. I guess that’s what you get for not using a language in several years.

Java is a terrific language for object-oriented development (as is C#, if you’re working primarily on Windows), but it does insulate you from a lot of fairly low-level issues in favor of giving you higher-level expression. This little program basically just looks for and reduces rows of data from experimental replicates, and outputs the reduced data set with error terms. Simple descriptive statistics, plus a bit of data-structure work. But without the Collections library and some of the Jakarta Commons stuff, I really had to think about how to do it.
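The heart of that reduction — collapsing repeated measurements of one experimental condition into a mean plus an error term — is only a few lines in any language. Here’s a rough Java sketch of that step (a sketch of the idea, not the actual Perl script):

```java
public class ReplicateReducer {
    // Reduce replicate measurements of a single experimental condition
    // to { mean, standard error of the mean }.
    public static double[] reduce(double[] replicates) {
        int n = replicates.length;
        double sum = 0.0;
        for (double x : replicates) sum += x;
        double mean = sum / n;

        // Sum of squared deviations from the mean.
        double ss = 0.0;
        for (double x : replicates) ss += (x - mean) * (x - mean);

        // Sample standard deviation (n - 1 denominator), then the
        // standard error of the mean.
        double sd = n > 1 ? Math.sqrt(ss / (n - 1)) : 0.0;
        double se = sd / Math.sqrt(n);

        return new double[] { mean, se };
    }
}
```

The rest of the script is bookkeeping: grouping rows by parameter combination and emitting one reduced row per group.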

Guess it points out how you need to keep using skills in order to keep them sharp.