Monday, May 20, 2024

German Grammar Checkers

I can speak some German. I'll never be fluent, but I can usually get by. Sadly, I make a lot of grammatical errors. It'd be nice to have a tool that could help me find and eliminate them. Syntax and grammar are structured things, seemingly tailor-made for algorithmic analysis. Surely there is software that can analyze my sentences, point out places where I've broken the rules, and tell me how to fix things!

There is. I recently tested more than a dozen programs and web sites that offer this service. The results were less impressive than I'd expected. On my (tiny and unrepresentative) set of sentences containing errors, most tools failed to find most of them. For errors that were found, it was common for the suggested fixes to be wrong. 

I found these sites to offer the most useful results:

  • LanguageTool describes itself as an AI-based spelling, style, and grammar checker. My sense is that the focus is on spelling and grammar, not style. I've found it to do a pretty good job, though there are errors it misses.
  • Scribbr bills itself simply as a grammar checker. It also produces good results, though a hair below those of LanguageTool.
  • DeepL Write claims that its AI approach yields "perfect spelling, grammar, and punctuation" and provides alternative phrasings that "sound fluent, professional, and natural." This means it may rewrite your text to not just eliminate mistakes, but also to make it sound different (presumably better). In my experience, it does a very good job of finding and eliminating errors, but it's sometimes difficult to determine whether it changed something because it's incorrect or because it just felt like rewording it.
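For scripted checking, LanguageTool also exposes a public HTTP API. Here's a minimal sketch of querying it for German text (the endpoint and response shape come from LanguageTool's public API; `summarize` is my own helper, not part of the service):

```python
import json
import urllib.parse
import urllib.request

API_URL = "https://api.languagetool.org/v2/check"  # LanguageTool's public endpoint

def check_german(text):
    """Send text to LanguageTool and return its list of reported matches."""
    data = urllib.parse.urlencode({"text": text, "language": "de-DE"}).encode()
    with urllib.request.urlopen(urllib.request.Request(API_URL, data=data)) as resp:
        return json.load(resp)["matches"]

def summarize(matches):
    """Reduce each match to 'message: first suggested replacement'."""
    out = []
    for m in matches:
        repl = m["replacements"][0]["value"] if m["replacements"] else "(no suggestion)"
        out.append(f"{m['message']}: {repl}")
    return out

# Example (performs a network request):
# summarize(check_german("Das Tisch sieht gut aus."))
```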

In daily use, I generally feed my writing to both LanguageTool and Scribbr, because they're fast, and each sometimes finds mistakes the other misses. If I'm extra-motivated, I also turn to DeepL Write. I've found it to identify mistakes the others miss. I don't use DeepL Write all the time, because I find it annoying to have to tease out whether it changed something on the grounds of correctness or stylistic whim.

In addition to these sites, I also (very cursorily) tested the following systems. I found them to produce notably inferior results. I've listed them in order of decreasing performance, based on my (really limited) tests:

  • QuillBot is a sister company to Scribbr that presumably uses the same underlying technology. I found that the two systems generally give identical results. There are exceptions, however, and in those cases, I found that Scribbr did a better job.
  • Google Docs can be configured to check spelling and grammar as you type. In my testing, it delivered mediocre results.
  • Sapling also produces mediocre results, but it often says "Sign in to see premium edit." I didn't do that, so I can't comment on its premium edits.
  • Microsoft Word, like Google Docs, can be configured to check for spelling and grammar errors as you type. On my tests using Word from Microsoft 365, its coverage was inferior to Google's.
  • Rechtschreibprüfung24 and Korrekturen produced the same results in my testing, so it's possible that they use the same underlying (and unimpressive) checking engine.
  • TextGears and GermanCorrector also produced the same results on my tests, so it's possible that they share a checking engine. The results are similar enough to those from Rechtschreibprüfung24 and Korrekturen that it's conceivable that all four use the same underlying technology. In addition, OnlineKorrektor.de looks and acts identically to GermanCorrector, so it could be that there are two URLs for a single underlying checker.
  • Duden Mentor is the only system I tested that flags the errors it finds, but doesn't offer suggestions on how to fix them.
  • Online-Spellcheck couples its poor ability to find mistakes with a checking speed that is notably worse than its competitors. In addition, it replaces its input window with an output window, so you can't just paste new text in to check something different.
  • Studi-Kompass found none of the errors in my tests. That suggests that it wasn't working or that I was doing something wrong.

I must reiterate that my testing was very limited, so my conclusions are tenuous. If you know of more comprehensive comparisons of German grammar checkers, please share what you know in the comments!

My testing focused on incorrect articles, because that's a problem area for me. I used the following test sentences, where I've boldfaced the part of each sentence that's wrong. I realize that if you know German, you will recognize what's wrong without my help, and if you don't know German, you'll just see randomly boldfaced text, but I can't resist the Siren's call of the boldface error indicator.

  1. Das Tisch sieht gut aus.
  2. Ich gehe im Küche.
  3. Ich bin in die Küche.
  4. Ich will einen Ort finden, die schön aussieht.
  5. Beim Check-in haben wir die Größe des Lobbys bewundert.
  6. Schließlich habe ich mich entschlossen, dass ich einen Ort finden musste, der zwischen Singapur und den USA liegt (d.h., der auf dem Heimweg ist), und die gute Flugverbindungen hat.

I invented sentences 1-3 as representing common simple errors. Sentences 4-6 are from or are variations on things I've actually written.

I scored the systems' results as follows:

  • 2 points if the error was found.
  • 2 more points if only one fix was suggested and it was correct; 1 more point if more than one fix was suggested, but the correct one was among them.
  • -1 point if only incorrect fixes were suggested.
  • -1 point if rewrites were suggested beyond what was in error.  (This is designed to penalize DeepL Write for mixing error corrections and stylistic rewrites.)

If a system found the error in a test sentence and suggested the proper fix (and it didn't suggest anything else), it got the full four points. If it found the error, but it didn't suggest the proper fix, or if it muddied the water with rewrites unrelated to the error, it got between one and three points, depending on the details of what it did.
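As a sanity check on the arithmetic, the rubric can be expressed as a small function (a sketch; the parameter names are mine, not part of any tool):

```python
def score_result(found, suggestions, correct_fix, extra_rewrites):
    """Score one system's handling of one test sentence.

    found:          whether the system flagged the error at all
    suggestions:    list of fixes the system offered (possibly empty)
    correct_fix:    the known-correct fix, for judging the suggestions
    extra_rewrites: whether the system also rewrote text that wasn't in error
    """
    if not found:
        return 0
    points = 2                                    # the error was found
    if correct_fix in suggestions:
        points += 2 if len(suggestions) == 1 else 1
    elif suggestions:
        points -= 1                               # only incorrect fixes offered
    if extra_rewrites:
        points -= 1                               # penalize unrelated rewrites
    return points

# A clean find-and-fix earns the full four points:
print(score_result(True, ["Der Tisch"], "Der Tisch", False))  # 4
```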

A perfect score for the set of six sentences would be 24 points. The best any system did was LanguageTool, which got 21. Scribbr was close behind at 20 points. DeepL Write got 19. Then there was a gap until QuillBot's 16 points. Google Docs scored 14, Sapling 13, and Microsoft Word 10. Rechtschreibprüfung24, Korrekturen, TextGears, and GermanCorrector/OnlineKorrektor.de clumped together with 6 points, which is one reason I suspect they may all be using the same checking technology. Duden Mentor also got 6, but its behavior is quite different from the other systems with that score. Online-Spellcheck got 5 points. Studi-Kompass got none, but, as I noted above, my guess is that either the system wasn't working or I was doing something wrong.

 

Tuesday, February 20, 2024

Tracking Travel

I like to travel. I've been a few places. I have a map on the wall with pins where I've been. It's old school, but I started it before the Internet existed. I'd like to move it into the digital era. Looking into that led me to Most Traveled People (MTP) and NomadMania (NM). Both offer the ability to generate maps of places visited based on data you enter. I tried both. The NM data entry process was so slow and cumbersome that I gave up. MTP worked better. The map it produced showing the countries I'd been to makes me look pretty well traveled:

This is terribly misleading. Country-level granularity means that if you visit only a single place in a big country (e.g., the USA, Canada, Russia), the map makes it look like you visited the whole thing. Recognizing this, both MTP and NM break the world into much smaller regions, 1500 in the case of MTP and 1301 in the case of NM. My MTP region map is not just less impressive, it's frankly a little depressing for somebody who feels like he's been around:
The region-based approach is better than one based on countries, but as I was entering the data for my travels in the United States, I found that MTP treats a few states as multiple regions. California, for example, comprises four regions, and Texas three. (NM does the same thing.) Like most states, Oregon--my state--is a single region, and my state pride was wounded at the idea that Colorado is broken into east and west, and Georgia into north and south, yet all of Oregon is thrown into a single basket. Eastern and western Oregon differ greatly in terms of geography, climate, economics, politics, and culture. Having been to one of them doesn't mean you've been to the other in any meaningful way.

Breaking the world into regions and tracking who's been where is good for ranking people in terms of how geographically widespread their travels have been. Such rankings are the bread and butter of MTP and NM. I was surprised to find that I'm a comparative couch potato. Kayak tells me that since I started using it in 2011, I've traveled nearly 500 days, flown over a half million miles, and been in 17 time zones, yet those trips plus my pre-Kayak travels let me lay claim to barely 10% of MTP's 1500 regions. With the paltry 43 countries I've visited, I'm not even halfway to qualifying for the Travelers' Century Club. From the perspective of competitive travel, I might as well not even have a passport.

Fortunately, I'm not out to engage in big-league travel competition. I just want a digital approach to tracking where I've been. For that purpose, I'm thinking a custom Google Map with digital push-pins is the way to go. It's basically the same thing I've got on my wall now, except in digital form.


Tuesday, November 28, 2023

Pumpkin Pie Cutters

Some years ago, I got it into my head that just as there are cookie cutters for cookies, there should be pie cutters for pumpkin pie. I bought the deepest tree-shaped cookie cutters I could find, thinking I could stack them and produce festive pie pieces for a holiday party. It didn't go as planned. I couldn't get the stacking to work, and the result of using just one cutter was kind of a disaster:
Nevertheless, proof of concept! 

I found that the KindredDesignsCA shop at Etsy offered custom-made 3D-printed cookie cutters. They agreed to make extra-deep cutters for me, one in the shape of a tree, another in the shape of a snowflake. The cutters worked great, except that once I'd extracted a piece of pumpkin pie with a cutter, the pie stuck inside the cutter. I had KindredDesignsCA make plungers so that I could push the pie out of the cutters:

I was so pleased with the result (shown at the top of this post), I started looking for new pie-cutter-shape ideas. For reasons not worth going into, I hit upon the idea of US states, and the next thing I knew, I was looking at pieces of pumpkin pie that looked like California, Texas, and Minnesota:
This year I decided that for Thanksgiving dinner, it would be nice to have pieces of pumpkin pie that looked like turkeys and pumpkins, so KindredDesignsCA again did their 3D magic for me:
I learned a few new things from these latest cutters. One was that it's a bad idea to try to get too detailed. Check out the well-defined beak in the turkey cutter below...

...and compare it to the poorly-defined or broken-off beaks in the pieces of turkey-shaped pie above. We're sculpting with pumpkin pie here, so just because you can produce a cutter with well-defined details doesn't mean you can get those details to be retained in the pieces of pie you cut.  

On the other hand, I was worried about the narrow strips of pie for the turkey's legs holding together, and they came out fine.

So far, I've employed these cutters only for pumpkin pie, but a friend and I were musing about what else they could be used for. Ideas include sponge cake, gingerbread, pancakes, hamburger patties, ice cream sandwiches, and gelatin. Plus cookies, of course. In the end, they're just overly-deep cookie cutters with plungers.

As far as I know, the only drawback to these cutters is that they require hand washing. The material used for the 3D printing has a comparatively low melting point, so if they were to be put into a dishwasher, you'd likely end up with pie cutter goo all over everything.



Saturday, November 4, 2023

Electric Cars are Still Luxury Goods :-(

More than three years ago, I blogged about EVs being luxury goods. Some 13 months ago, I showed that the Nissan Ariya--the only electric compact SUV with the basic features I demand (all-wheel drive, an openable moonroof, a 360-degree camera, and an EPA range of at least 235 miles)--came with a 52% price premium vis-a-vis a comparable gas-powered Nissan Rogue. That difference put the Ariya solidly in luxury car territory.

In the intervening year, the Ariya has gone from forthcoming to present on dealer lots, and just yesterday Volvo made it possible to configure and price the 2024 XC40 Recharge for the US market. It joins the Ariya in offering the fundamental features I insist on. The XC40 comes in both battery- and gasoline-powered versions, which makes it easy to measure the cost of going electric.

The intervening year has also seen a big jump in interest rates:

Average 60-month new car loan rate (per https://bit.ly/3QUl7ER)

The concomitant reduction in demand for new cars has changed the market. I decided to recheck the EV price premium by again comparing the cost of the Nissan Rogue with the equivalently-equipped Nissan Ariya. This time I checked not just MSRPs, but also prices at cars.com, where I did two searches. The first was nationwide, i.e., for the best price I could find anywhere. The second was "near me," which means within about 100 miles of Portland, Oregon. I then repeated the experiment for the Volvo XC40 (gas-powered) and the Volvo XC40 Recharge (batteries). For the Nissans, my data are for the 2023 model year, because the 2024s aren't out yet. The Volvo data are for the 2024 model year.

This is what I found:

The Ariya continues to have an MSRP about 50% higher than the equivalently-equipped Rogue, and this doesn't change when looking for real cars within 100 miles of me. If I expand my search to the entire country, the price premium drops to 41%, but it still represents a difference of nearly $14,000. It's also an artificial cost differential, because the lowest-priced Rogue is in Arizona, while the cheapest Ariya is in Illinois.

Volvo is a premium brand, so MSRP pricing for its Rogue equivalent, the XC40, starts 26% higher than the Nissan. Going electric from there (to the XC40 Recharge) demands a relatively modest additional 26%, but those premiums compound: the result is 58% above the MSRP for the Rogue. Within the Volvo line, the premium to go electric is only 26%, but the price increase I care about--from an ICE-powered compact SUV of any make to a similarly-equipped EV of any make--is nearly 60%. That's far above the 25% I consider acceptable.
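The compounding of those two premiums is easy to verify (a quick sketch; the 26% figures are rounded, which is why the product lands near rather than exactly on the quoted 58%):

```python
def combined_premium(*premiums):
    """Compound successive price premiums (given as fractions) into one."""
    total = 1.0
    for p in premiums:
        total *= 1.0 + p
    return total - 1.0

# 26% for the Volvo brand, then 26% more to go electric:
print(round(combined_premium(0.26, 0.26), 4))  # 0.5876, i.e., roughly 59%
```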

Lest you think I'm not taking government tax credits and rebates into account in pricing the Ariya and the XC40 Recharge, I actually am. Neither qualifies for the federal $7500 tax credit (which is fictional for most people, anyway), and my state's program for EV rebates stopped accepting applications months ago, because it ran out of money.

To me, the most interesting aspect of the pricing data is the smallness of the differential between the Ariya and the XC40 Recharge. Here's the table above with a line added showing the premium you pay for choosing Volvo over Nissan (i.e., the XC40 Recharge over the Ariya): 

 
Regardless of whether you look at MSRPs or prices at cars.com, the Volvo costs no more than 8% more than the Nissan. I've never been able to figure out what makes premium brands premium, but if Volvo has it and Nissan doesn't, I'd expect that to motivate many buyers to choose the XC40 Recharge over the Ariya. 

As for me, I'll continue to bide my time and hope that the EV industry eventually comes out with a compact SUV with the features I want at a price that's no more than about 25% beyond the cost of a comparable ICE vehicle.


Monday, August 21, 2023

Nav App Audio Conflict Resolution

You're tootling down the road using a navigation app connected to your car's infotainment system, and you're approaching a turn in the route. You've enabled voice navigation, so the app should use your car's audio to tell you what to do. But what if the audio system is already in use? What should happen if your nav app and another audio source want to use your car's audio system at the same time?

Different nav apps solve this UX design question in different ways. Some resolutions seem obvious. Some are clever. Some are so bad, it's hard to believe they made it into production.

The stakes can be high. If two apps want to use the audio system at the same time, typically only one will succeed. If it's the nav app, you might miss an AMBER Alert on the radio while you're next to a car matching the description of one with a kidnapped child. If it's not the nav app, you might miss a freeway exit and have to drive many miles before the next one.

I compared the resolution of audio conflicts for Google Maps and Apple Maps under iOS 16.6 CarPlay on a 2019 Nissan Rogue. I don't know whether what I found is representative of other nav apps, other cars, or other phone-car interface systems (e.g., Android Auto).

The scenarios I checked were what happens when your nav app wants to speak and:

  • A streaming app is playing.
  • The car radio is playing.
  • A phone call is in progress.
  • You're talking to your phone.
  • Your phone is talking to you, e.g., Siri is responding to a query or command.

This is what I found.

Nav App vs. Streaming App

The streaming apps I tested (Pandora and Simple Radio) can be paused, which is characteristic of every streaming app I'm familiar with. It seems obvious to me that the proper behavior when a nav app wants to talk is to pause the streaming app, have the nav app talk, then resume the streaming app. Google sort of agrees, because that's what Google Maps does when the streaming app is Pandora. However, when the streaming app is Simple Radio, Google mutes it instead of pausing it. I don't know the reason for this difference.

Apple Maps has behavior that's not just different from Google's, it's incomprehensibly bad. When Apple Maps wants to talk while a streaming app is playing, it just starts talking. The streaming app continues to stream. If what's being streamed is spoken audio, you have two voices talking at the same time! Neither can be understood. It's hard to imagine a worse approach than this.

Apple would probably argue that I'm mischaracterizing what it does. Apple Maps employs audio ducking, whereby the volume of the streaming app is reduced when Apple Maps speaks. In concept, that's not unreasonable, but in my experiments, the ducking effect kicks in only when the volume of the streaming app crosses a loudness threshold. This threshold is far above my usual listening level. I had to go out of my way to elicit the ducking effect. When I did, I found that the volume of the ducked audio was still high enough to compete with Apple Maps' spoken directives.

To summarize:

  • Google Maps: Pause stream, speak, resume stream.
  • Apple Maps: Speak while stream plays. At high stream volumes, duck stream while speaking.

I think Apple's approach shows promise, but its implementation needs considerable refinement. For me, both the threshold and ducked volumes are too high. I wish I could configure them. With that said, comments on the Internet make clear that many people listening to spoken audio (e.g., podcasts) would prefer pausing over ducking, anyway.

Nav App vs. Car Radio

Radio is a different kind of streaming beast, because it's not pausable. That requires that Google change its approach to audio conflicts. Not so for Apple, which sticks to its guns and issues navigation instructions on top of whatever's playing on the radio. If you happen to be listening to a talk show or the play-by-play of a sporting event, you've again got dueling voices, and there's a good chance you'll understand neither. I'm generally a fan of UX consistency, but Apple's here is of Emerson's foolish variety.

Google's approach--to mute the radio while Google Maps speaks--is a lot more reasonable. There's a tiny chance you'll miss something important (e.g., an AMBER Alert), but it's more likely you'll just miss a snippet of a song, commercial, or host's blather.

We thus have:

  • Google Maps: Mute radio, speak, un-mute radio.
  • Apple Maps: Speak while radio plays. At high radio volumes, duck radio while speaking.

Given their current implementations, Google Maps' behavior is vastly preferable to Apple Maps', but I think a blended approach would improve on both. A choice between muting and configurable ducking would be very attractive.

Nav App vs. Phone Call

While you're on the phone, Google is polite enough not to wrest control of your car's speakers from the person you're talking to, but it has no Plan B. If you don't notice the visual navigational directive on the CarPlay screen, you're out of luck: Google Maps remains silent if you're in a phone call.

Apple doesn't need a Plan B, because its Plan A is so good. When Apple Maps wants to speak, but you're on the phone, it issues a chime--a subtle indication that it'd like to tell you something, but it can't. It's your cue to look at the CarPlay screen to see what you need to do. I find it works well.

So:

  • Google Maps: Don't speak or make any other sound.
  • Apple Maps: Don't speak, but issue a chime.

What I admire about Apple's approach is that it takes advantage of the fact that you have a screen; CarPlay requires it. A chime is enough to tell you to look at it, but it's not so intrusive that it interrupts the flow of a conversation. Well done, Apple!

Nav App vs. Dictation

If a nav app wants to speak while you're talking to your phone (e.g., issuing a command to Siri or dictating a text message), the audio conflict is between you and your nav app. Google Maps bursts in like an excited child who, unable to restrain itself, says its piece without regard for the fact that you're already talking. It's rude, not to mention a sure-fire mechanism for derailing your train of thought. 

Google Maps' interruption also aborts whatever you were saying, e.g., the command you were issuing or the text you were dictating. This means you have to start over after the excited child has said its piece (and hope that it produces no further audio interruptions before you're finished).

Apple Maps' chime strategy would work well here, but Apple inexplicably adopts Google's speak-no-evil policy and remains silent. If you're dictating and Apple Maps wants to tell you something, the only way you'll know is by looking at the CarPlay screen. It's a bitter pill to swallow after the cleverness of Apple's phone call conflict resolution.

This is a case where both nav apps offer disappointing and frustrating behavior:

  • Google Maps: Interrupt and abort dictation, speak.
  • Apple Maps: Don't speak or make any other sound.

Nav App vs. Phone Voice Response

If your phone is talking to you when Apple Maps wants to speak, Apple employs the chime approach again. It works as well here as it did in the phone-call scenario, and it really raises the question of why Apple didn't apply it in the dictation case. You hear the chime, you know to look at your CarPlay screen, but you can continue to listen to your phone. It just works.

Google Maps treats your phone like a radio station. It mutes your phone's voice while it issues navigational instructions, then it turns your phone's voice on again. This is a curious decision. It means you could miss important information, such as a crucial part of an email message. I'd expect a nav app to treat a phone's voice like a streaming app and pause it while the nav app speaks. Perhaps there's no CarPlay API to pause Siri output...?

Recap:

  • Google Maps: Mute phone voice, speak, un-mute phone voice.
  • Apple Maps: Don't speak, but issue a chime.

If you have additional information on how nav apps handle audio conflicts, please let me know in the comments below.

Tuesday, July 25, 2023

Image Metadata: Thoughts after 4000+ Scans

 This is part 7 of my series on metadata for scanned pictures.

Part 1: The Scanned Image Metadata Project

Part 2: Standards, Guidelines, and ExifTool

Part 3: Dealing with Timestamps

Part 4: My Approach

Part 5: Viewing What I Wrote

Part 6: The Metadata Removal Problem

Part 7: Thoughts after 4000+ Scans (this post) 


 

In January 2022, I explained in parts 1-4 of this series why and how I store metadata in image files containing scans of, e.g., 35mm slides. I failed to mention that the strategy I came up with was largely based on merely thinking about the problem. At the time, I had made relatively few scans. However, I had contracted with Chris Harmon at Scan-Slides to scan several thousand old slides, so I had to specify how the metadata was to be handled. The approach I developed was what I directed Scan-Slides to do.

In the 18 months since then, I've received the image files from Chris, worked with the metadata, and added scans of a few other types of objects (e.g., drawings, notes, and letters). My perspective on image file metadata is now based not just on thinking, but also on experience. It's a good time to reflect on how well my metadata strategy has held up.

Metadata Entry

One thing has held up very well: the decision to have Scan-Slides do the initial metadata entry. It was clear from the get-go that this was going to be demanding work. It required deciphering hand-written descriptive information on slide trays and slide frames and, for each scan, entering the information into the appropriate metadata fields in the proper format. Several slide sets were from overseas trips, so many descriptions refer to locations in or people from places that look, well, foreign, e.g., "Church at Ste Mere Eglise" and "Sabine & Sylvie - Fahrradreise in Langeland." 

Copying the "when developed" date off slide frames proved unexpectedly challenging. Although many dates were clearly printed, some were debossed rather than printed, and those were harder to read. Some timestamps were mis-aligned with the slide frames they were printed on, leading to dates that were partially missing, e.g., the lower half of the text or its left or right side was not present. Some frames had no development date on them, but it was hard to distinguish that from faint timestamp ink or debossed dates that were only slightly indented. 

Chris handled description and development date issues with patience and equanimity. His accuracy was impressive. There were some errors, but that's to be expected when you're copying descriptions from thousands of slides involving places, people, and languages you don't know. I don't believe anyone would have done better. 

It's something of a miracle that he was willing to do the metadata entry at all.  Of the half-dozen slide scanning services I contacted, he was the only one who didn't reject it out of hand. There was an extra fee for the work, of course, but it was money well spent. It allowed me to devote my energies to aspects of the project that only I could do, e.g., correcting metadata errors and organizing image sets.

Four Metadata Fields

Using only four metadata fields (Description, When Taken, When Scanned, and Copyright) has proven sufficient for my needs. Populating those fields hasn't been burdensome. I feel like I got this part of metadata storage right.

The "When Taken" Problem

Using the Description metadata fields to store definitive information about when a picture was taken and storing an approximation of that in the "when taken" metadata fields has worked acceptably, but this remains a thorny issue. I still think it's prudent to follow convention and assign unknown timestamp components the earliest permissible value, but I continue to find it counterintuitive that images with less precise timestamps chronologically precede images with more accurate information. 

The vaguer the "when taken" data, the less satisfying the "use the earliest permissible timestamp values" approach. Google Photos--one of the most important programs I use with digital images--insists on sorting photo libraries chronologically, so there's no escaping the tyranny of "when taken" timestamps.  (Albums in Google Photos have sorting options other than chronological, but for a complete photo library, sort-by-date is the only choice.) For images lacking "when taken" metadata, Google Photos uses the file creation date. This is typically off by decades for scans of my slides and photos, so I've found that omitting "when taken" metadata is worse than putting in even a poor approximation. 
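The "earliest permissible value" convention, and the sorting quirk it produces, can be sketched in a few lines (the helper is hypothetical; real metadata would be written with a tool like ExifTool):

```python
def pad_when_taken(year, month=None, day=None):
    """Fill unknown timestamp components with the earliest permissible value,
    producing an EXIF-style YYYY:MM:DD date string."""
    return f"{year:04d}:{month or 1:02d}:{day or 1:02d}"

# A photo known only to be "from 1976" sorts before one taken on 1976-07-15,
# even though it may actually have been taken later that year:
dates = sorted([pad_when_taken(1976, 7, 15), pad_when_taken(1976)])
print(dates)  # ['1976:01:01', '1976:07:15']
```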

Overall, my "put the truth in the Description fields (where you know programs will ignore it) and an approximation in the 'when taken' fields (where you know programs will treat it as gospel)" approach is far from satisfying, but I don't know of a better one. If you do, please let me know.

Making Image File Metadata Standalone

I originally believed I was storing all image metadata in image files, but I was mistaken. Inadvertently, I stored some of the metadata in the file system. Scans of the slides from my 1976 trip to Iceland, for example, are in a directory named Iceland 1976, and the files are named Iceland 1976 001.jpg through Iceland 1976 146.jpg. The metadata in the image files indicate what's in the images and when the slides were taken, but they don't indicate that they were taken in Iceland or that they were from a single trip. That information is present only in the image file names and the fact that they share a common directory.

Going forward, I plan to include such overarching information about picture sets in each of the scans in the set. That will make the Description metadata in each image file longer and more cumbersome, but it will also make each image file self-contained. As things stand now, if a copy of Iceland 1976 095.jpg (shown) somehow got renamed and removed from the other files in the set (e.g., because it was sent to someone using iMessage), there would be no way to extract from the file that this picture was taken in Iceland and was part of the set of slides I took during that trip. My updated approach will rectify that.
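Here's a sketch of how I might batch-prepend such set-level context with ExifTool (assuming ExifTool is on the PATH; the note text and directory name are from the Iceland example, and ExifTool's `-TAG<STR` redirection syntax builds the new value from the old one):

```python
import subprocess

SET_NOTE = "From the set 'Iceland 1976': slides from my 1976 trip to Iceland."

def build_command(directory):
    """Build an ExifTool command that prepends set-level context to each
    image's existing Description, rewriting the files in place."""
    return [
        "exiftool",
        "-overwrite_original",
        # Redirection syntax: the new Description is the note plus the old value
        f"-Description<{SET_NOTE} $Description",
        directory,
    ]

# To actually run it (requires ExifTool installed):
# subprocess.run(build_command("Iceland 1976"), check=True)
```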

Putting All Metadata into Image Files

The slides I had scanned were up to 70 years old. Over the decades, slides got moved from one collection to another. Some got mis-labeled or mis-filed. As I was reviewing the scans, I often found myself wondering whether a picture I was looking at was related to other pictures I'd had scanned. My parents may have made multiple trips to Crater Lake or Yosemite National Parks in the 1950s and 1960s, for example, but I'm not sure, because there's not enough descriptive information on the slides (or the boxes they were in) to know.

More than once I wished I could find a way to reconstitute the set of slides that came from a particular roll of film. I think this is often possible.  In addition to hints in the images themselves (e.g., what people are wearing, what's in the background, etc.), slide frames are made of different materials and often include slide numbers, the film type, and the name of the processing lab. Development date information, if present, is either printed or debossed, and, when printed, the ink has a particular color (typically black or red). All these things can be used to determine whether two potentially related slides are likely to have come from the same roll of film. 

I briefly went down the road of creating a spreadsheet summarizing these factors for various sets of slides, but it's a gargantuan job. It didn't take long for me to stop, slap myself, and reiterate that I was working on family photos, not cataloging historic imagery for scholarly research. Nevertheless, I think it would be nice to have more metadata about slide frames in the image files. Whether Chris (or anybody else) would agree to enter such information as part of the scanning process, I don't know.

Dealing with Google Photos

Words cannot describe how useful I find Google Photos' search capability. Despite the effort I've invested adding descriptive metadata to scanned image files and the time I've taken to help Google Photos (GP) identify faces of family and friends, it's not uncommon for me to search for images with visual characteristics that are neither faces nor described in the metadata, e.g., "green house" or "wall sconce". Such searches turn up what I'm looking for remarkably often. The alternative--browsing my library's tens of thousands of photos--is impractical. This makes Google Photos indispensable. That has some implications.

When Scan-Slides started delivering image files, I plopped them into a directory on my Windows machine where GP automatically uploads new pictures. I then went about the task of reviewing and, in some cases, revising the metadata in the scans. In many cases, this involved adjusting the "when taken" metadata, either because the information I'd given Scan-Slides was incorrect (e.g., mis-labeled slide trays) or because Scan-Slides had made an error when entering the data. I also revised Description information to make it more comprehensive (e.g., adding names of people who hadn't been mentioned on the slide frames) or to impose consistency in wordings, etc. The work was iterative, and I often used batch tools to edit many files at once.

Unbeknownst to me, GP uploaded each revised version of each image file I saved. And why not? Two image files differing only in metadata are different image files! By the time I realized what was happening, GP had dutifully uploaded as many as a half dozen versions of my scans. I wanted only the most recent version in each set of replicates, but Google offers virtually no tools for identifying identical images with different metadata. The fact that I'd often changed the "when taken" metadata during my revisions and that GP always sorts photo libraries chronologically meant that different versions of the same image were often nowhere near one another.
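
Google offers no such tool, but as an illustrative sketch (not something I actually used at the time), a short Python script can flag local copies that differ only in metadata. The idea is to hash only each JPEG's image content, skipping the metadata segments (APP0-APP15 and COM) where EXIF, IPTC, and XMP data live:

```python
import hashlib
from pathlib import Path

def jpeg_content_hash(data: bytes) -> str:
    """Hash a JPEG's image content while skipping its metadata segments
    (APP0-APP15 and COM), so copies differing only in EXIF/IPTC/XMP
    metadata produce the same digest. A sketch for baseline JPEGs."""
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    h = hashlib.sha256()
    i = 2
    while i < len(data) - 1:
        if data[i] != 0xFF:
            break                                # malformed; stop early
        marker = data[i + 1]
        if marker == 0xDA:                       # SOS: image data follows
            h.update(data[i:])                   # hash the rest verbatim
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if not (0xE0 <= marker <= 0xEF or marker == 0xFE):
            h.update(data[i:i + 2 + length])     # keep non-metadata segments
        i += 2 + length
    return h.hexdigest()

def find_duplicates(folder: str):
    """Group JPEGs in `folder` whose image content hashes match."""
    seen, dups = {}, []
    for path in sorted(Path(folder).glob("*.jpg")):
        key = jpeg_content_hash(path.read_bytes())
        if key in seen:
            dups.append((seen[key], path))
        else:
            seen[key] = path
    return dups
```

Two files whose pixels are identical but whose Description or "when taken" fields differ hash the same, so they'd be grouped as duplicates; it wouldn't have prevented the uploads, but it would have made the cleanup less painful.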

The lesson, to borrow a term from Buffy the Vampire Slayer, is that iteratively revising image file metadata and having Google Photos automatically upload new image files are un-mixy things. 

I told GP to stop monitoring the directory where I put my scans, spent great chunks of time eradicating the thousands of scan files GP had uploaded, and resolved to manually upload my scans only after I was sure the metadata they contained was stable. A side-effect was that I could no longer rely on GP acting as an automatic online backup mechanism for my image files, but since I have a separate cloud-based backup system in place, that didn't concern me.

Multiple-Sided Objects

Scannable objects have two sides: the front and the back. For photographs, it often makes sense to scan both sides, because information about a photo is commonly written on the back. (If scans of slides included not just the film, but also the slide frames, it would make sense to scan both sides, thus providing a digital record of the slide numbers, film types, and processing labs I mentioned above.)

Scanning both sides of a two-sided object (e.g., the front and back of a photograph) yields two image files. That's a problem. Separate image files can get, well, separated, in which case you could find yourself with a scan of only one side of an object. Preventing this requires finding a way to merge multiple images together. 

I say multiple images, because there might be more than two. Consider a letter consisting of two pieces of paper, each of which has writing on both sides. A scan of the complete letter requires four images: one for each side of each piece of paper. If the letter has an envelope, and if both sides of it are also scanned, the single letter object (i.e., the letter plus its envelope) would yield six different images.

Some file formats support storing more than one "page" (i.e., image) in a file. TIFF is probably the most common. Unfortunately, TIFFs, even when compressed, are much larger than the corresponding JPGs--about three times larger, in my experiments. More importantly, TIFF isn't among the file formats supported by Google Photos. When multi-page TIFFs are uploaded to GP, GP displays only the first page. For me, that's a deal breaker.

It's natural to consider PDF, but PDF isn't an image file format, so it doesn't offer image file metadata fields. In addition, PDF isn't supported by GP. Attempts to upload PDFs to Google Photos fail.

My approach to the multiple-images-in-one-file problem is to address only double-sided objects (e.g., photographs, postcards, etc.). I handle them by digitally putting the front and back scans side by side and saving the result as a new image file (as at right). A command-line program, ImageMagick, makes this easy. The metadata for the new file is copied from the first of the files that are appended, i.e., from the image for the front of the photograph.
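
As a sketch of that workflow, the wrapper below assumes ImageMagick 7 (the `magick` command) and ExifTool are installed and on the PATH; `merge_front_back` and the filenames are illustrative, not part of either tool:

```python
import subprocess

def merge_front_back(front: str, back: str, out: str, run: bool = True):
    """Join the front and back scans of a photo side by side, then copy
    the front scan's metadata onto the combined file.

    Assumes ImageMagick 7 (`magick`) and ExifTool are on the PATH."""
    cmds = [
        # +append concatenates the images horizontally (left to right)
        ["magick", front, back, "+append", out],
        # copy all metadata tags from the front scan onto the new file
        ["exiftool", "-overwrite_original", "-TagsFromFile", front,
         "-all:all", out],
    ]
    if run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

Using `+append` (rather than `-append`) is what yields the side-by-side layout; `-append` would stack the scans vertically instead.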

I haven't yet had to deal with objects that require more than two scans, e.g., the hypothetical two-sheet letter and its envelope. My current thought is that the proper treatment for those is to ignore Google Photos and just use PDF. I'm guessing that most such objects will be largely textual (as would typically be the case for letters), in which case OCR and text searches will be more important than image file metadata and image searches.

Monday, June 19, 2023

The Convertible EV Search: Four Years Later

Today marks exactly four years since my search for a little EV convertible ended with my purchasing a gas-powered Miata. If I were starting my search today, it'd end the same way. For the American market in 2019, there were no electric models to choose from. There are none to choose from now.

It's a little different in Europe. (I don't know about the rest of the world.) Even in 2019, the Smart Fortwo Cabrio was available, though I stand by my assessment of its range as laughable (57 miles per the EPA in 2019, 80 miles per the manufacturer now). Nearly as laughable is the range of the other EV ragtop in Europe, the limited-edition electric MINI Convertible. Its WLTP range is 125 miles, which roughly equates to an EPA rating of 102 miles. The MINI complements this paltry range with an eye-popping price: some $65,000 to start. That kind of money buys two Miatas and leaves enough cash in your pocket for a very nice vacation pretty much anywhere in the world. 

The Miata's range is 357 miles, so the electric MINI convertible offers less than a third of the range at more than twice the price.

Although there are no EV convertibles for the American market right now, a few have been announced. They make the MINI look like a bargain. Maserati and Fisker have announced the GranCabrio Folgore and Ronin, respectively. The former is purported to ship this year, the latter in 2024. Starting prices are around $200,000. (Update 4 August 2023: The Ronin web site lists a starting price of $385,000, and, per this article, it won't ship until the end of 2025.) The Polestar 6 costs the same, but it's not slated to ship until 2026.

Those for whom the prices above are too low and the details too specific may prefer to focus on the Genesis X Convertible, a car that's been announced for production, but not for when or for how much. Pricing is speculated to run somewhere between $200,000 and $300,000.

The days of the affordable little EV convertible don't appear to be arriving anytime soon.

Friday, May 26, 2023

The Compact SUV EV Search: Four Years Later

On this day four years ago, my search for an electric compact SUV ended with the purchase of a gasoline-powered car.  During the time of my search, there was only one compact SUV EV available. It was prohibitively expensive and lacked features I consider essential. Two years later, more affordable options had come to market, but nothing offered the basic features I want: all-wheel drive, a surround-view camera, an openable moonroof, and an EPA range of at least 235 miles. Now that two more years have gone by, it's time to take a look at the compact SUV EV landscape again. 

It's still desolate.

The good news is that we've finally reached a state where one vehicle--exactly one (out of the nearly 30 compact SUV EVs available)--offers the basic features I want. It's the Nissan Ariya. Unfortunately, it costs more than 50% more than its gasoline-powered equivalent (a premium of close to $20,000), and that's too much money. 

Two other cars almost fulfill my basic feature requirements, though not my affordability criterion. The Mercedes EQB 300 has AWD, the moonroof, and the surround-view camera, but its EPA range, per the car's window sticker, is only 232 miles. The range is surprising, because the range for the 2022 model at fueleconomy.gov is 243 miles. Why the 2023 model has a lower range than the 2022 model, I don't know, but the number on the sticker is the number on the sticker. (Fueleconomy.gov has no information for the 2023 model.)

The other nearly feature-complete car is the AWD version of the 2024 Volvo XC40 Recharge. The 2023 model already checked all the boxes except the required range, but Volvo recently announced that the 2024 model's range would be around 254 miles. That's encouraging, but it's currently a car on paper only. Pricing hasn't been announced, and it can't yet be ordered.

Even when it exists, it's unlikely to change things for me. Assuming the 2024 Volvo XC40 Recharge is priced similarly to the 2023 version, both it and the Mercedes EQB 300 will have MSRPs pushing or exceeding sixty grand. That's even more than the Nissan Ariya. 

None of these cars qualifies for the $7500 federal tax credit (which I recently realized is less attainable than the EV media generally acknowledges).

Four years after I threw up my hands in frustration, abandoned the idea of buying an EV, and purchased a gas-burning automobile instead, I'll have gone from having zero EVs to choose from to having one. Pricing remains firmly at the luxury level. The acceptably equipped and reasonably priced compact SUV EV I long for continues to exist only in my imagination.

The slow progress of the last four years is disheartening. I've decided to significantly reduce how closely I monitor EV developments. For years, I've followed the field closely, eagerly reading articles about new and coming vehicles. I'm going to stop doing that. From now on, I'll just check every few months to see if anything has become available that offers the features I care about at a price I consider reasonable. There's a school of thought that the IRA's battery subsidy provisions will lead to a radical reduction in EV pricing. We'll see.

Monday, May 1, 2023

About that $7,500 Federal EV Tax Credit...

I read a lot of articles about EVs (electric vehicles). The writers of these articles commonly assume that if a car qualifies for the full $7,500 federal tax credit, the effective purchase price drops by that amount. A recent post by Inside EVs is typical:

The 2023 Volkswagen ID.4 is eligible for the full $7,500 federal tax credit. ... The 62-kWh battery version starts at an MSRP of $38,995 (+$1,295 DST), which effectively means $32,790. The 82-kWh battery version starts at an MSRP of $43,995 and is effectively priced at $37,790, while the AWD versions are $4,000 more expensive (effectively from $41,590).

Notice how the "effective" prices are $7,500 less than the MSRP plus the DST (destination charge). This is terribly misleading. To get the full $7,500, you have to owe at least $7,500 in federal income tax for the year you buy the car. If you don't, you get less than $7,500. The less you make, the less you get.

I used the SmartAsset Federal Income Tax Calculator to create a quick-and-dirty mapping from income to federal tax liability (and hence EV tax credit). These data are for a two-person married household taking the standard deduction and making no 401(k) or IRA contributions: 

You can see that for taxpayers fitting this profile and making under about $95,000, the $7,500 EV tax credit is a myth. For a couple making $55,000, the credit is less than half the full amount. For a couple getting by on $25,000, there's no tax credit at all.
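
To make the arithmetic concrete, here's a back-of-the-envelope sketch. It assumes the 2023 married-filing-jointly brackets and the $27,700 standard deduction, ignores every other credit and deduction, and is emphatically not tax advice:

```python
# Back-of-the-envelope only -- not tax advice. Assumed figures: 2023
# married-filing-jointly brackets and the $27,700 standard deduction.
# Brackets above 22% are omitted; the credit caps out long before them.
BRACKETS = [(22_000, 0.10), (89_450, 0.12), (190_750, 0.22)]  # (upper, rate)
STD_DEDUCTION = 27_700
FULL_CREDIT = 7_500

def tax_liability(gross_income: float) -> float:
    """Federal income tax for a couple taking the standard deduction."""
    taxable = max(0.0, gross_income - STD_DEDUCTION)
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable <= lower:
            break
        tax += (min(taxable, upper) - lower) * rate
        lower = upper
    return tax

def usable_ev_credit(gross_income: float) -> float:
    """The nonrefundable credit is capped by what you actually owe."""
    return min(FULL_CREDIT, tax_liability(gross_income))
```

Under these assumed brackets, a couple grossing $95,000 owes just over $7,500 and can use the full credit; at $55,000, the liability is about $2,836, so that's all the credit they get; at $25,000, the standard deduction wipes out the liability, and the credit with it.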

$95,000 is higher than the median household income in 2021 for every state (as well as the District of Columbia) except Maryland. (The source for this appears to be the US Census Bureau, but I found the data at Statista. Credit Karma shows identical numbers.) I'm comparing apples and oranges a little by using a two-person married household for the $95,000 and a household of any type and size for the median incomes, but these are the values that are easy to find. For a broad-stroke picture, I think they suffice. If you have better statistics, please let me know.

The broad-stroke picture is that in every state except Maryland, the majority of two-person households would probably fail to qualify for the full $7,500 federal EV tax credit. Some articles on EVs mention that the full credit is available only to those who owe at least that much in federal income tax, but they generally make it sound like an edge case. The data above suggest that failing to qualify for the full credit is the rule, not the exception.

For completeness's sake, I'll note that the tax credit goes away for high-income taxpayers. For a married couple filing jointly, the credit vanishes when the couple's modified adjusted gross income hits $300,000.

Wednesday, April 26, 2023

How I Learned to Stop Worrying and Love Free Digital Stuff

In October 2015, I found myself behind the wheel of a rental car in Bucharest, Romania. I was frantic, lost, and getting loster. My plan to drive from the rental location to my nearby hotel had gone south the instant I left the parking lot. The sea of cars prevented me from making the turns I wanted, and in less than a minute, I was lost, helplessly flowing with traffic in a city I'd never been in. There was nowhere to pull over, no place to stop and check a map. Vehicles were everywhere, including on the sidewalk. I flung my iPhone at my companion and told him to bring up the Maps app. Having never used a smart phone, he had no idea how. Too busy driving to help him, I grabbed the phone and barked "navigate to Andrei's parents' house!" into it. 

It was an act of desperation. It was my first smart phone, and I'd had it only a month. (I was late to the party.) I still had trouble remembering how to hold it right side up. But I knew it could navigate, and I knew it could respond to voice commands. I also knew that if we didn't break free from the traffic in which we were drifting, things were going to get worse. I held my breath and waited.

"Beginning navigation to Andrei's parents' house," the iPhone soothingly intoned, followed by the turn-by-turn directions nav apps are known for. We arrived a short time later, cool, calm, collected, and both convinced that smart phones were a lot more than we'd given them credit for. 

Speech recognition on smart phones is free. You don't pay extra for it. Navigation apps are free, too. You get a lot for what you don't pay. Speech recognition works in multiple languages as well as in terrible audio environments. I've dictated messages in my American-accented German on loud city streets, and my iPhone has gotten it about as right as it does my native English in a quiet room. That's right enough that I take its correctness for granted and get annoyed when it makes mistakes. Nav apps work worldwide. I would happily pay monthly fees for these services. I'm pleased that I don't have to.

I'm equally pleased that I don't have to pay for Google's internet search services. They don't find all the needles I'm looking for in the world's digital haystacks, but across text, images, and videos, they find the vast majority of them. Where Google comes up short, I can generally rely on YouTube (for videos) and Yandex (for images). Those services are free, too.

Another thing I'd pay for is Google Photos' search capabilities. They have an uncanny ability to help me find the photographs (and videos) I'm looking for out of the tens of thousands I have stored on their servers. There's no charge for my being able to request "Scott at Uluru" and have Google pluck this from my mountain of images:

This isn't my best look, but Uluru looks good, and Google's ability to locate images in this way turns a heap of random snapshots and videos into a useful collection of visual souvenirs. In my experience, nothing from any other vendor can touch this ability. It's remarkable that Google doesn't charge for it.

Google doesn't charge for Google Earth, either. It's a mainstay of my trip-planning tools, making it easy for me to get 3D views of places I might want to visit, drop pins at important sites, measure distances between locations (both by road and as the crow flies), and much more. Like speech recognition, worldwide navigation, internet search, photo search, and automatic language translation (which I haven't mentioned, but use nearly every day), it's valuable enough that I'd pay money for it. It's amazing that I don't have to.

The standard rejoinder is that if you're not paying for a product, it's because you are the product. The currency with which you're bought and sold is data. Your travels on the internet are tracked via cookies (among other things), and your movements in the real world are tracked via the GPS data on your phone (among other things). Your purchases are monitored, the songs you listen to are noted, the routes you drive are logged, and precise records are kept of how much time you spend where, in both the real and virtual worlds. Detailed dossiers on you are sold to advertisers, who use this personal information to aim advertisements at you with laser precision. Such is the price of free, we're told.  

I believe it. But after years of reading about it and thinking about it and trying to decide if I should be outraged, I've decided that if I can trade my data for speech recognition, language translation, universal navigation, comprehensive internet search, personalized photo search, travel planning tools like Google Earth, and myriad other digital products and services, it's one of the best deals I'll ever get. I'm absolutely in.

There are two reasons for this. First, the corporate villains of the digital world are hardly breaking new ground in profiling me for advertising purposes. They may be able to put together a higher-resolution view of my life than companies that don't follow my movements through the internet, but my life has become a pretty open book without them. 

You want to know who my parents are, where I live, the amount of my mortgage, whether I'm married, the kind of cars I own, whether I've been arrested, my political affiliation, or how often I vote? It's all part of the public record.

You want to know what I buy and what I eat? Ask my local grocery stores. They made me choose between joining their loyalty clubs (thus enabling tracking my purchases) or paying up to double their "special member pricing." I joined. Not that their clubs are necessary. I usually pay by credit card, and it wouldn't take a genius to figure out that the purchases made with my card were probably made by me.

Credit card companies have known for decades where and when I spend money. I use cash less and less, so credit card companies know more about me than ever. The briefest of glances at my transactions will reveal that I like to travel and I eat out a lot. A slightly longer look will reveal my travel destinations, the kinds of restaurants I patronize, the times of day I buy meals, and the full complement of stores I frequent. Throw some machine learning at that data, and I'm surely a pretty transparent advertising target.

My cellular carrier tracks the movements of my phone, roadway cameras track the movements of my car's license plate, smart doorbells and Teslas in Sentry Mode record me as I walk my dog, and security cameras monitor me on public transit and in spaces public and private. Facial recognition software means there's no hiding in a crowd. 

I use social media very little, so my direct footprint there is tiny, but my family and friends are more engaged. They tag me in their photos and mention me in their posts. I almost never log in to Facebook or Instagram, but the borg that is Meta can probably describe me better than I can describe myself. (I confess to being a regular WhatsApp user.)

The world is awash in data that is or could easily be linked to me. Some of it stems from the Internet, but much does not. It was nearly a quarter century ago (in January 1999) that Scott McNealy famously remarked, "You have zero privacy… Get over it!" I have.

The second reason I don't mind trading my data for complimentary speech recognition, worldwide navigation, internet search, etc., is that the bargain is far from Faustian. If the only downside to the deal is that I'll be exposed to advertising that's more likely to be interesting to me, how is that bad? It'd be one thing if I were unwittingly signing up for more ads, but if I'm going to be accosted by a fixed number of ads regardless, why would I prefer irrelevant ones over ads more likely to address things I care about?

Advertising is intrusive. I subject myself to as little as possible. I use ad blockers in my web browsers, and I get most of my video from ad-free subscription services. (For music, I'm a throwback and listen to terrestrial FM radio (!), but this is generally in the car, and when an ad comes on, I switch to a different station or hit the mute button.) The relatively few ads that get to me are the ones I can't find a way to quash. Why shouldn't I want to maximize the chances that I'm interested in what they have to say? To this end, I've actually enabled Google's "Personalized ads" toggle. Google's going to collect as much data about me as it can, no matter what I do. For the ads that get past my defenses, it might as well put in the extra effort to increase the likelihood I'll find some merit in them.

In sum, (1) no matter what I do, advertisers will have access to detailed profiles of me, and (2) custom-tailored ads are preferable to generic ones. From my perspective, the price of free--my incremental cost for free speech recognition, free worldwide navigation, free comprehensive internet search, free personalized image search, free language translation, and free lots-of-other-stuff--is nothing. All that free stuff really is free, at least to me, because advertisers harvest my data either way.

I'm uneasy about two things. First, the most commonly mentioned downside to extensive personal tracking is targeted advertising, but that's not the only risk. Profiles of what I do and where I go could be used for stalking, blackmail, extortion, digital impersonation, and governmental abuse. Personalized ads are the smile of the beast. It also has teeth.

Second, while I'm comfortable with my ability to resist personalized ads for products and services, I'm less sanguine about my ability to recognize and disregard political ads designed to influence me. If you engage an army of psychologists to train AI to read personal profiles and identify hot buttons, I've no doubt it'd find mine. I believe my lack of engagement with social media largely shields me from such attacks, but I recognize that this may simply be hubris on my part. 

It's possible to imagine worlds where personal data isn't automatically collected, packaged, sold, and exploited. Things don't have to be the way they are. There are people working to bring such worlds into existence. I'm not optimistic about their success, however, and at any rate, the world I live in is the one we have now. As long as that's the case, I'll happily take advantage of the free things my data is paying for.

Wednesday, March 15, 2023

Pergola Dreams

For years, I've dreamed of a vine-covered pergola ablaze with flowers, something like this (snatched from here):

In 2020, I decided to look into making this a reality.

Pergolas are not hard to build. As Dave Barry might say, you can throw a pile of lumber on the ground, and it will form a pergola. You just put up the posts, attach the beams, put the rafters on top of those, and cap the whole shebang with stringers (aka runners or purlins). It's been said that a pergola may be the ideal DIY project for a long weekend.

That's assuming you build it the conventional way. The conventional way is not really me. I dislike seeing fasteners, so my pergola dreams lack visible hardware (e.g., bolts or screws). I also dislike the stacked look of rafters atop beams and stringers atop rafters. I'd rather have it look like all the roof components are pretty much at the same level. And I don't care for the look you get when you look up through the roof of a typical pergola. All those rectangles! I want something more visually interesting during the years before the vines have grown to cover the structure.

After a few iterations, I came up with a design for a "floating pergola," whereby the roof sort of looks like it's floating above the posts and rafters (at least from the front). Cross-lapping the beams makes it look like they pass through one another:

I also came up with ideas for hiding the hardware holding the structure together, e.g., putting vertical metal rods in the posts which would fit into holes bored in the underside of the beams. Whether that would prove structurally sound, I can't say, but it would hide the hardware, and it would allow the top of the pergola to sit on the posts and be held in place by gravity.

The more creative aspect of the project was figuring out what the roof would look like from below and above. The view from above is relevant, because the second floor of our house looks down on the site for the pergola. 

My design above has the beams and rafters forming this pattern:

The question is what to do with the stringers. From the pergola's perspective, they're just decorative, but once the vines have grown to cover the structure, the stringers will need to hold their own under the weight of the vines lying on top of them. They thus need to be both visually interesting and relatively sturdy.

I mocked up a number of possibilities:


Design 5 was my ultimate choice:

With these plans in hand, I approached a number of local contractors. I figured I'd get a few bids, choose a contractor, and watch the sawdust fly.

That's not what happened. Three contractors never responded to my email inquiry about the project. Three came and talked with me, looked at my plans, promised to send a bid, then ghosted me. An additional three didn't do pergolas, didn't work in my area, or weren't accepting new jobs. One wanted payment of several thousand dollars to develop a 3D model of the structure to be built before issuing a bid. One offered a time-and-materials bid that he estimated would come to about $11,500, but it made no mention of the footings for the posts. The twelfth contractor offered only a "very rough" estimate of $21,500. Nobody was willing to offer a fixed-price bid for the work. 

I was astonished. I knew that my design was unconventional, but it's still just carpentry. I was working on this during the first year of the pandemic, so perhaps that played a role, but oh-for-twelve is still a pretty dismal record.

I briefly considered doing the construction myself, but I just don't want to. It's a lot of work, and I'd rather have a professional do it. Anyway, it eventually dawned on me that keeping a white pergola that's covered with vines looking good means coming up with a way to clean or paint it, and I don't know how to approach that task. Look again at the pergola at the top of this post. How do they keep that gleaming white structure gleaming white? However they do it, it's probably time-consuming, and what I want is a picture-perfect vine-covered pergola without any fussy maintenance. I'm guessing those don't exist.


Friday, January 20, 2023

The Beardsley Salome Dinnerware Project, Part 2: Production

Part 1 of my report on this project is here.

Just as creation of the artwork for my Beardsley dinnerware took longer and was more difficult than I'd anticipated, production of the dishes was also unexpectedly challenging. Without the extraordinary commitment of Enduring Images (the company that made the dishware), I'd still be looking at mockups on a computer instead of dinnerware on a table.

Let's start with how things end. Here's a photo of one of the dinner plates I had made, along with a smaller plate and a bowl:

Here are the serving dishes:

And here are more of the bowls, because they are the only component of the set that uses color:

The collection is nice, but it's not as nice as I'd hoped. The pieces look pretty good from a distance, but the closer you look, the more you notice things that aren't as they should be. Well, the closer I look, the more things I notice that aren't as they should be. I spent several months staring at zoomed-in copies of Beardsley's drawings and at dinnerware mock-ups using those drawings. I notice some things other people wouldn't.

But I'm getting ahead of myself. Having seen how production ended, let's shift to how it began.

In September 2021, shortly after starting the project, Enduring Images (EI) ran tests to ensure that the blanks I'd selected were compatible with their production technology. As a test image, I selected a drawing from Beardsley's work on Le Morte Darthur, because I felt that its areas of solid color as well as its use of fine lines were representative of the images I'd want for my dinnerware. At that time, I had not yet decided to use only artwork from Salome.

When I got a test plate back, I was surprised to find that the edge of the decal could be both seen and felt. It wasn't obvious, but once you'd noticed it, it was hard to ignore. Patrick, my contact in Production at EI, explained that this was a flux shadow. I didn't like it, so Patrick outlined three approaches to eliminating it. I'll refer to these approaches as Techniques A, B, and C, but for those who must open every box to see what's inside, A is on-glazing with a flux topcoat and full-coverage decals, B is on-glazing with a non-flux topcoat, and C is in-glazing.

Each of these approaches has limitations. Technique A works for relatively flat pieces, but it can't be used for bowls. Technique B tends to yield a matte finish, rather than a glossy one. Technique C has a poor track record. Patrick had found that it was rarely successful.

Technique A was a variation on what had been done for the test plate. I was confident it would leave no flux shadow on the relatively flat pieces it was applicable to. I had EI run additional test plates using Techniques B and C. As promised, neither produced a flux shadow. Surprisingly, Technique B produced glossy results. It also yielded a more intense black than Technique C. 

I recommended we use Technique B for the dinnerware. Unlike Technique A, it was applicable to all dish shapes, and it yielded deeper blacks than Technique C. I then threw myself into production of the artwork, a task that ended up stretching over the next eight months. For details, see Part 1 of this report.

Three months later, we ran some samples to test the colors for my bowls. I reminded Patrick of the importance of avoiding a flux shadow. He told me he'd selected Technique C for just that reason. He didn't say why he'd chosen Technique C over Technique B, and I didn't ask.

After five more months (i.e., at the end of July 2022), I submitted final artwork for the full dinnerware set. Patrick started work on a dinner plate as a pre-production test. We expected smooth sailing. The seas were not cooperative. After three failed trials, Patrick, taking into account the tribulations I'd endured with the artwork, began to talk of a Beardsley curse. After several additional failures, he concluded that Technique C was not going to work for the project.

The stumbling block was the large swaths of solid black, especially on the rims. Patrick had been unable to find a way to fire the plates such that these areas emerged a uniform color and texture. The picture below is an example of his results. The black in the middle of the plate is deeper than that on the rim, there is cracking in the color at the rim extents, and striations are present at the rim edge in the upper left.

We retreated to Technique A. It dodged the problems of Technique C, but a new issue became apparent. Some areas that were supposed to be white were coming out grey. In the image below, compare the original artwork (above) with the image on the plate (below). On the plate, there's a grey haze around the peacock and the headdress that is not present in Beardsley's drawing:

Patrick explained that the black areas tend to bleed a little, and there's no practical way to eliminate the bleeding.

At this point, I'd been working with EI for more than 13 months, and pre-production testing was in its fifth month. I was having only 41 pieces produced, so the business case for continuing to work with me had long since evaporated. EI as a company and Patrick as an individual had invested far more time and energy in the project than could ever be justified, and they had done it with a cheery attitude and an earnest commitment to the project's success. I would have liked to find a way to eliminate the flux shadows on the bowls that I knew Technique A would leave behind, and I would have liked to play around with techniques to reduce the bleeding giving rise to grey areas, but EI hadn't signed up for what had become a research project. You don't ask people who've already gone above and beyond to go higher and further. I told Patrick that testing was over and it was time to make the set.

The dishes showed up about two months later. Creating them involved printing, hand-placing, and firing 82 decals, one for the top of each piece and one for the bottom. Most decals were unique. Bottom decals had to be matched with their top-side partners and had to be oriented the same way. Opportunities for errors were rife. I was pleased to see that only one decal had been placed incorrectly.

I noticed some significant loss of detail in fine white lines present in the artwork. Compare the artwork below (left) with its appearance on a plate (right):

I also saw that the black rims were not as uniform in color as on the test plates we'd run. Contrast the mottled appearance of the production plate (left) with the more uniformly black test plate (right):

Patrick explained that in an effort to minimize the bleeding of blacks into adjacent white areas, he'd tinkered a bit with the production process. That had resulted in some loss of fine details as well as a reduction in the density of the blacks.

During the months between initial firing tests in autumn 2021 and pre-production testing at summer's end 2022, Technique B had somehow dropped off the radar. I hadn't forgotten it, however. When Patrick remade the plate with the misplaced decal, I had him make a second copy of the plate using Technique B. That allowed me to compare the results of Techniques A and B on a piece of my dinnerware.

It was an interesting exercise. The rim color for Technique B (right) was much better than for Technique A (left):

That photo was taken under unusually strong light, and it exaggerates the difference. Even under normal lighting, however, it's clear that Technique A's black is mottled, while Technique B's is nearly uniform.

On the other hand, Technique A (left) retained drawing details better than Technique B (right):

Technique B yields better solid blacks, then, but it leads to a loss of detail beyond that which Technique A already incurs.

That's how the story ends. I finally have a set of dinnerware based on Aubrey Beardsley's drawings, something I'd yearned for since 1989. But it's not the set I'd envisioned. The areas that should be solid black are closer to dark grey. If you look closely, or if you see them under strong light, you see that the color is somewhat mottled. The images lack details present in Beardsley's drawings, and some areas that should be white have a grey haze to them. The set is still nice. To the casual observer, it's very nice. It's just not as nice as I'd hoped.

I suspect it would be possible to do better, but getting there--finding the right combination of toner, topcoat, kiln temperature, firing time, and who-knows-what-else--would be time-consuming and expensive. It'd be a research project--even more so than this endeavor ultimately became. That's not in the cards.

I'm lucky I got this far. Without Enduring Images' dedication to seeing this project through, I wouldn't have. I remain grateful to them and to Patrick for their exemplary patience, cooperation, and assistance.