Saturday, September 7, 2019

A C++ Hall of Fame

Rock & roll has a hall of fame. So do toys. Fresh water fishing and towing each have one, and there's one for pretty much every kind of sport. I think C++ should have one, too.

CppCon, which starts in about a week, provides a natural setting for discussions about a C++ Hall of Fame. To get things rolling, I present the following proposal, on which I welcome comments. I won't be at CppCon, but I'll send a final version of the proposal to the Standard C++ Foundation before the conference begins. Let me know what you think! (If there are a lot of comments, don't be surprised if I don't respond to each one.)

Proposal for a C++ Hall of Fame ("HoF")


The success of C++ is based on the efforts of many contributors, but a few have done especially significant work. A C++ HoF would allow the C++ community to formally recognize and honor contributors whose efforts have been unusually important.


The HoF will be run by a Steering Committee, whose size and makeup will be determined by the Standard C++ Foundation. Duties of the Steering Committee will include overseeing the nomination, selection, and induction of HoF members, as well as maintaining the HoF itself.

The Steering Committee will establish a Selection Committee, whose role will be to solicit, accept, and evaluate nominations for HoF membership. The Selection Committee will determine who is included in the HoF.

The Steering and Selection Committees need not have disjoint membership. Their membership may even be identical, but it may be preferable for some Steering Committee members to work only on HoF activities unrelated to the nomination or selection of new HoF members.

Eligibility for HoF Membership

HoF eligibility will be determined by the Steering Committee.  I suggest that, initially, only people (living or dead) or teams (i.e., groups of collaborating people) are eligible for membership in the HoF. In the future, eligibility can be broadened (e.g., to permit companies and organizations), but I think it’s reasonable to begin with a people-only HoF.

To reduce conflicts of interest, no one involved in HoF administration is eligible to be selected for membership in the HoF. However, existing HoF members may serve as administrators, and former administrators are eligible for the HoF.


The nomination process will be determined by the Selection Committee.  I suggest an initial “anybody can nominate anybody” policy and a generous nomination period. If this proves unwieldy, more restrictive policies can be adopted.


Each year, the Selection Committee will choose no more than five nominees for inclusion in the HoF. Selection is an honor. Choosing too many new members would dilute the effect.

The primary criterion for selection is that the nominee has made one or more unusually significant contributions to the success of C++. Such contributions may have been made in the areas of design, specification, implementation, application, explanation, popularization, or any other aspect of C++ that the Selection Committee deems appropriate.

The Selection Committee may consider negative factors outside the realm of C++ when determining whether a nominee is worthy of HoF membership. If a nominee is guilty of a heinous crime, for example, the Selection Committee may take that into account when deciding whether to select the nominee for the HoF.


The Steering Committee will determine how inductions are to take place. I suggest that CppCon schedule an induction ceremony as part of its program, during which new inductees are awarded a membership token (e.g., certificate, trophy, gaudy ring) and given time to make public comments marking the occasion.

The first group of HoF members will be selected before CppCon 2020. This will make it possible for them to participate in an induction ceremony during the conference.


The Steering Committee will determine what form the HoF will take. I suggest beginning with a HoF web site that showcases each member and summarizes the contributions that led to their inclusion.

Saturday, January 19, 2019

Adventures in UX disasters: The Pioneer AVH-2440NEX dimmer control

To provide a display for the backup camera I recently had installed on my car, I had a Pioneer AVH-2440NEX head unit installed in my dashboard. The display was distractingly bright at night, so I set out to dim it. The unit supports automatic night dimming, so I figured this would be easy. It is, but only after you've endured a UX hazing ritual of the kind that's distressingly common in the software industry.

On the AVH-2440NEX (and related models), there is a display setting called Brightness. It does not control the brightness of the display. It controls the blackness of the display. The brightness is controlled by the Dimmer setting. Dimmer has a range of 48 values, 1 to 48. Larger Dimmer settings decrease the dimness of the display, because Dimmer controls the display's brightness.

Values for Brightness (which do not control the display's brightness) are -24 to 24.

To summarize: The display brightness is controlled by a setting called Dimmer, which has a range of 48 values starting at 1, with higher values decreasing the dimness. The display blackness, in contrast, is controlled by a setting called Brightness, which has a range of 49 values that start at -24.


Think of all the professional developers--UX designers, programmers, QA people, managers--who had to sign off on this before it shipped to customers. I don't understand how they could collectively believe that this is a reasonable (much less intuitive) design for mainstream consumers.


Saturday, September 1, 2018

The Errata Evaluation Problem

I no longer plan to update my books to fix technical errors.

It's not that I'm too lazy to do it. It's that in order to fix errors, I have to be able to identify them. That's something I no longer trust myself to do.

If you write books and you're anything like me, you make mistakes. You can read and reread 'til your eyes bleed, test 'til your heart gives out, proofread 'til doomsday, and cajole the most exacting technical experts into reviewing your manuscript 'til they stop answering your email, and still you'll publish stuff that's wrong. Some of it will be laughably wrong. I don't know why. That's just the way it is. At least that's the way it's been for me.

Since originally publishing Effective C++ at the end of 1991, I've done my best to fix errors in my books as soon as possible after I found out about them. When I found out about a bug in printing n of a book, I normally worked with my publisher to fix it in printing n+1.

I most commonly find out about bugs from readers. They send email describing what they think is a problem, often including what they believe is a fix. If you look over my books' errata lists (links are at the bottom of this page), you'll see hundreds of problems I've addressed in response to reader reports. I'm grateful for every report I've received. Each time I've updated a book to include fixes stemming from reader reports, I've also updated the book's acknowledgements to include the names of the readers whose reports improved the new printing.

In my experience, most bug reports are valid. But some are not. Sometimes readers assume that the compiler they use is Standard-conformant, but it's not. Sometimes they are unaware of or misunderstand provisions in the Standard. Sometimes they make mistakes copying the code out of the book before running their tests. Their best efforts notwithstanding, readers, like me, are fallible.

So when I get a bug report, the first thing I do is evaluate whether it's valid. Given the technical nature of my books, the complexity of C++, and the finickiness of my readers, this is often challenging. Separating valid bug reports from (sometimes subtly) invalid reports requires that I be at the top of my game. Otherwise, I risk rejecting legitimate bug reports or, worse, editing my books to incorporate invalid revisions.

Having retired from active involvement in C++ over two and a half years ago, I'm no longer at the top of my C++ game. That's been true for a while, but until recently, I've remained confident in my ability to assess incoming bug reports. Recently, however, a report came in, and after thinking about it for a while, I realized that I just didn't know whether it was valid. Rather than give myself a crash course in C++ to the point where I could make an accurate determination, I decided to throw in the towel. I sent this to my reader:
As you may know, I retired from active involvement in C++ at the end of 2015, and in the ensuing two and a half years, I’ve forgotten enough details of the language that I am no longer able to properly evaluate bug reports regarding the technical aspects of my books. C++ is a large, intricate language with features that interact in complex and subtle ways, and I no longer trust myself to keep all the relevant facts in mind. As a result, all I can do is thank you for your bug report, because I no longer plan to update my books to incorporate technical corrections. Lacking the ability to fairly evaluate whether a bug report is valid, I think this is the only responsible course of action.
From now on, if you send me a bug report about technical material in my books, you'll probably get the same response.

This applies only to the technical material in my books. It just so happens that my brain here is only mostly dead. For the time being, I figure I can still evaluate the accuracy of reports about incorrect fonts, missing words, improper formatting, etc. So if you find an error of the non-technical variety, let me know. Heck, if you find what you believe is a technical error, go ahead and send it to me, if you want to. Just don't be surprised if what you get in response looks a lot like the reply above.


Saturday, June 16, 2018

Minor Change to Blog Charter

Until today, this blog has been about "Scott Meyers' Professional Activities and Interests." I've just removed the "Professional," so now the blog is about "Scott Meyers' Activities and Interests." In theory, this means I can now blog about anything, though in practice, you're unlikely to notice much change. I'm not planning anything dramatic. In fact, I'm not planning anything at all. I just thought it'd be a good idea to relax the blog's thematic constraints.


Monday, June 11, 2018

Interesting Book: The Modern C++ Challenge

I recently became aware of a nifty new book about C++, The Modern C++ Challenge. Today I saw that the ebook is available for $10, which strikes me as quite the bargain.

Before I tell you why I think the book is interesting, let me dispense with some caveats. First, I haven't read the entire book, I've only looked at parts of it. Second, I haven't looked closely enough at the source code to evaluate it. (Because the book uses some C++17 features and my involvement with C++ ended with C++14, I wouldn't really be able to fairly evaluate it, anyway.) Finally, I got the book for free when Packt sent me a (digital) copy.

Two things struck me when I looked inside the book:
  • The "Modern" in The Modern C++ Challenge is as modern as you can get: C++17 (with the occasional mention of C++20).  To run the solutions to the problems in the book, you'll need a C++17-conformant compiler.
  • The "C++" in The Modern C++ Challenge is broader than just the language proper and its standard library. The list of software used by the book includes over a dozen third-party cross-platform libraries, including Boost, Asio, Crypto++, Curl, NLohmann/json, PDF-Writer, PNGWriter, pugixml, SQLite, and ZipLib.
The book itself consists of a series of programming problems ("challenges") and sample solutions. Some are simple, such as Problem 1:
Write a program that calculates and prints the sum of all the natural numbers divisible by either 3 or 5, up to a given limit entered by the user.
Others are more difficult, such as Problem 22:
Write a small library that enables expressing temperatures in the three most used scales, Celsius, Fahrenheit, and Kelvin, and converting between them. The library must enable you to write temperature literals in all these scales, such as 36.5_deg for Celsius, 97.7_f for Fahrenheit, and 309.65_K for Kelvin; perform operations with these values; and convert between them.
All in all, there are 100 problems in a variety of areas, including string processing, dates and time, concurrency, cryptography, and networking.

Because the book isn't afraid to lean on third-party libraries, some of the problems ask you to do things that the standard library can't touch. For example, here's the last problem in the book:
Write a program that can identify people's faces from pictures. At a minimum, the program must detect the face area and the gender of the person. This information should be printed to the console. The pictures must be loaded from the disk.
Wow. Unless there have been big changes to the STL since C++14, there's no "gender_from_image" functionality in the standard library. I wouldn't know where to start. The book's solution begins with some really useful information:
This is yet another problem that can be solved using Microsoft Cognitive Services. One of the services available in this group, called Face API, provides algorithms for detecting faces, gender, age, emotion, and various face landmarks and attributes, as well as the ability to find face similarities, identify people, group pictures based on visual faces similarities, and others.
This is representative of what I view as a strength of the book: the ability to introduce you to libraries and APIs beyond standard C++ that you may not be familiar with. I think that's an important contribution to C++ and its effective application, and between that and the use of features new to C++17, I think it makes the book worth looking into.

The Modern C++ Challenge is currently available for ten bucks for the digital versions of the book and for $35 for the digital and print combo platter. I think that's very reasonable pricing, and, no, I don't get anything for encouraging you to look at the book, nor do I get a kickback of any kind on sales. I just think the book looks really interesting.


Thursday, May 31, 2018

CppCon Workshop on Giving Good Technical Presentations

On Sunday, September 23 (the day before the official beginning of CppCon), Andrei Alexandrescu and Kate Gregory and I will be leading a workshop on how to give good technical presentations. Between the three of us, we've made hundreds (thousands?) of presentations on countless topics to pretty much every kind of audience. We certainly don't know everything that works (or doesn't), but we know some things, and we're eager to share what we've learned.

The workshop is really three workshops in one. Most of the day will be spent in breakout sessions, with Kate, Andrei and me each running a session in our own way. Each workshop participant will spend one breakout with each of us.  The details will vary, but each session will feature a short presentation by each workshop participant, so if you're part of the workshop, by the end of the day, you'll have received personalized suggestions from each of us on what you did well and what we think would help you do better.

For details about the workshop, consult its web page.

This is an interactive workshop, so attendance is limited. Half the slots have been set aside for the conference to offer to first-time CppCon speakers. The other half are open to anyone who wants to improve their technical presentation skills. If that includes you, I encourage you to sign up for the workshop.


Wednesday, May 16, 2018

Effective Modern C++ in Simplified Chinese!

In mid-2015, I was told that translations of Effective Modern C++ into both traditional and simplified Chinese had been authorized. About a year later, the traditional translation showed up on my doorstep (as I noted here). It's been nearly two more years, but the other shoe has finally dropped: EMC++ is now available in simplified Chinese.

Like the original English edition of the book (but unlike most translations), the simplified Chinese version uses multiple ink colors. Readers should thus benefit from the information those colors convey.

I'm pleased to welcome this translation into the Effective Modern C++ family. In theory, this makes the information in it accessible to over a billion additional people, so I'll be looking for O'Reilly to find a way to sell it to nearly all of them :-) If you'd like to be a customer, I'm told the place to buy the book--or at least a place to buy it--is here.


Saturday, May 5, 2018

New ESDS Book: More Effective C#, Second Edition

Addison-Wesley released Bill Wagner's new edition of More Effective C# last August, but I didn't find out about it and get a copy until a few days ago. The series editor is always the last to know!

If you're a C# programmer, I encourage you to give the book a close look. It's easy to do that, because Bill and Addison-Wesley have made unusually generous excerpts available at the book's web site (in the "Sample Content" tab): Chapter 2 (16 Items) is available as a freely-downloadable PDF, and Chapter 3 (8 Items) is available online. Together, that's nearly half the book you can read before you put down any money!


Monday, September 18, 2017

Brief Appearance at CppCon

This year's CppCon includes two panel discussions devoted to technical training, and I'll be on the one on Monday, September 25. Other members of the panel will be Giuseppe D'Angelo, Stephen Dewhurst, Kate Gregory, and Anthony Williams. The moderator will be Jon Kalb, who's also an experienced trainer. Together, we've probably indoctrinated many thousands of developers in the ways we believe to be right and just in the battle between programmer and machine.

Most people would probably date my work with C++ to the initial publication of Effective C++ in late 1991, but I'd been training professional programmers for several years before that, and since retiring from C++ involvement at the end of 2015, I've given a few more presentations on non-C++ technical topics (most recently a couple of weeks ago). All told, I have close to 30 years' experience training professional software developers, so I'd like to think I know a thing or two about it. To find out if I do, I encourage you to attend the panel session.

Monday will be the only day I'll be at the conference, so if you want to hunt me down to say hello, that'll be the day to do it.


Wednesday, July 5, 2017

Sales Data for EMC++: Print Books, Digital Books, and Online Access

O'Reilly President Laura Baldwin's recent blog post explaining O'Reilly's decision to discontinue selling individual books and videos through their web site (while continuing to publish books and videos for sale through other channels) inspired me to take a look at the sales data I have for Effective Modern C++. I wrote that book with both print and electronic publication in mind, assuming that by the time it came out, demand for digital formats would be at least as strong as demand for print products.

That has not proven to be the case. I have data for the first 35 months of the book's existence (through May 2017), and since initial publication, sales of digital editions make up only about 41% of the over 50,000 units (i.e., copies of the book) sold. Here's a chart of print sales versus ebook sales by month:
Because it takes more time to print books than to make them available on the Internet, the digital versions were downloadable four months before the print books came out. That's apparent at the left side of the chart. Since then, print sales have beaten ebook sales almost every month. Most of the time, it hasn't been much of a contest.

These data exclude sales of foreign language translations of the book. My royalty statements don't break down sales of translations into print and digital formats.

It's clear that buyers of EMC++ have a pretty strong preference for the paper version. This is consistent with sales data for my other books (Effective C++, Effective STL, More Effective C++), but those books were initially published before digital books took off, and they were never designed for digital consumption. The fact that print sales dominate for them is not a surprise.

O'Reilly is getting out of the retail book and video sales business in order to focus on its online subscription service, Safari. Baldwin states that that side of the business has the most customers and is growing the fastest. I don't doubt her. But what does that mean for me?

Here's the royalty source data for Effective Modern C++, broken down into "Online" sources (which include Safari) and "Other." Included in "Other" are all sales of complete books, regardless of format. Ebook sales are thus "Other", not "Online".
As you can see, the online component of my royalties (including Safari) is generally under 10% each month. Summed over the course of the book's existence, the online contribution to my total royalties is only 5.7%. There appears to be a slight upward trend over time, but it's hardly something that sets an author's heart aflutter. From a royalty point of view, sales of complete books are at least ten times as important to me as online access.

What do the data for Effective Modern C++ have to say about the trends in publishing Baldwin describes in her post?  Very little. A key observation in her post is that "digital enabled new learning modalities such as video and interactive content," and my book is an example of neither. She refers to how O'Reilly has long recognized that they aren't really in the book-publishing business, they're in the knowledge-spreading business. Books are one way to spread knowledge, but they aren't the only way, and from the perspective of a publisher, they are a way that's less and less important.

The charts above demonstrate that regardless of the general movement in the information-dissemination business towards digital, non-book-like, subscription-based models, complete books--especially print books--are, at least in the case of my readership, very much alive and kicking.

Friday, June 30, 2017

O'Reilly's Decision and its DRM Implication

On Wednesday, I got mail from Laura Baldwin, President of O'Reilly, announcing that "as of today, we are discontinuing fulfillment of individual book and video purchases on [our site]. Books (both ebook and print) will still be available for sale via other digital and bricks-and-mortar retail channels...[and] of course, we will continue to publish books and videos..." So O'Reilly's not getting out of the book and video publishing business, it's just getting out of the business of selling them at retail. For details, check out Laura's blog entry, this story at Publishers Weekly, or these discussions at Slashdot or Hacker News.

To me, the most interesting implication of this announcement is that O'Reilly's no-DRM policy apparently resonated little with the market. Other technical publishers I'm familiar with (e.g., Addison-Wesley, the Pragmatic Programmer, Artima) attempt to discourage illegal dissemination of copyrighted material (e.g., books in digital form) by at least stamping the buyer's name on each page. O'Reilly went the other way, trusting people who bought its goods not to give them to their friends or colleagues or to make them available on the Internet.

I don't know what motivated that policy. Perhaps it was a belief that trusting buyers was the right thing to do. But I can't help but think they took into account the effect it would likely have on sales. After all, publishing is a business.

Piracy is a double-edged sword. On the one hand, it means you receive no compensation for the benefit readers get from the work you put in. On the other hand, pirated books act as implicit marketing, expanding awareness of you and your book(s). They can also reach buyers who want to see the full product before making a purchasing decision or who wouldn't become aware of your book through conventional marketing efforts.

My feeling is that most people who choose pirated books are unlikely to pay for them, even if that's the only way to get them. As such, I'm inclined to think the marketing effect of illegal copies exceeds the lost revenue. I have no data to back me up. Maybe it's just a rationalization to help me live with the knowledge that no matter what you do, there's no way you can prevent bootleg copies of your books from showing up on the Net.

My guess is that a component of O'Reilly's no-DRM policy was a hope that it would distinguish O'Reilly from other publishers and would attract buyers who felt strongly about DRM. Whether it did that, I don't know, but O'Reilly's decision to stop selling individual products at its web site suggests that DRM (or the lack thereof) is not an important differentiator for most buyers of technical books and videos.

Wednesday, May 17, 2017

Interview with Me (in Hungarian)

Last month, I was invited to give a presentation at NNG in Budapest. During my visit to NNG, I was asked to talk with some people from HWSW, and the resulting interview has now been published. If you're comfortable with Hungarian (or with the results of a translation from Hungarian into whatever language you prefer), I encourage you to take a look.

In reading the interview, it may be helpful to know that the talk I gave at NNG was a shorter version of the presentation I gave at DConf earlier this month, "Things that Matter."



Tuesday, March 14, 2017

Keynote at DConf in Berlin on May 5

The folks behind the annual conference for the D programming language offered me a soapbox for my most fundamental beliefs about software and software development, so on Friday, 5 May, I'll be speaking in Berlin at DConf about

Things That Matter

In the 45+ years since Scott Meyers wrote his first program, he’s played many roles: programmer, user, educator, researcher, consultant. Different roles beget different perspectives on software development, and so many perspectives over so much time have led Scott to strong views about the things that really matter. In this presentation, he’ll share what he believes is especially important in software and software development, and he’ll try to convince you to embrace the same ideas he does.
Because this isn't a C++ talk, I sent the DConf organizers a more general bio than I usually use. It may include some things about me you don't know, so perhaps you'll find it interesting:
Scott Meyers started programming in 1971, and he started teaching programming in 1972. He’s best known for his Effective C++ books, but he’s also worked on constraint expression for programming languages, program representations in development environments, software simulations of bacteriophage lambda, general principles for improving software quality, and the effective presentation of technical information. In 2009, he received the Dr. Dobb’s Excellence in Programming Award, and in 2014, an online poll likened his hair style to that of the cartoon character, He-Man.
If you're working with or interested in D, I encourage you to consider attending the conference. If so, be sure to stop by and say hello after my talk!


Friday, February 3, 2017

By the Numbers: The Great Foreign Edition Book Giveaway

A couple of months ago, I offered to give away foreign editions of my books, asking recipients only that they reimburse me for the postage. Here are some numbers associated with the giveaway.
  • 112: Books I had to give away.
  • 70: Books I gave away. (There were no requests for the others.)
  • 65: People who requested books.
  • 37: People I sent books to. (It wasn't possible to satisfy all requests.)
  • 13: People whose requests overlooked the requirement to include a mailing address. (Such requests were moved to the bottom of the priority list. Some still got satisfied, because they were for books for which no higher-priority requests came through. In those cases, I pinged the requesters for mailing addresses.)
  • 21: Countries to which I was asked to send books.
  • 13: Countries to which I sent books. (It still wasn't possible to satisfy all requests.)
  • 26: Requests for Effective Modern C++ in Russian (the most frequently requested book).
  • 1: Copies of Effective Modern C++ in Russian I had to give away.
  • 5: Maximum number of books sent to any single requester. (These books were in Japanese, but the mailing address was in Sweden, and the request came from someone with an email provider in Italy, so it appears that an Italian in Sweden requested books in Japanese :-}.)
  • 905.65: Total cost of postage for books I sent (in US dollars).
  • 75.4: Percent of this cost I've so far been reimbursed.

Tuesday, January 31, 2017

Updated Versions of EC++/3E and EMC++

New printings of Effective C++, Third Edition and Effective Modern C++ have recently been published by Addison-Wesley and O'Reilly, respectively. Both printings include fixes for all the errata that had been reported through December, though a couple of bug reports for EMC++ have since trickled in, sigh. For EC++/3E, the new printing is number 17. For EMC++, it's 10.

If you purchased digital copies of these books from the publisher, you should be able to log in to your account and download the latest versions. (O'Reilly customers should have received a notification to this effect. AW doesn't seem to tell people when new printings are available for download.)

If you purchase print copies of these books, I encourage you to make sure you're getting the latest versions. I have copies of the latest printings, so I know they exist in print form.

I hope you enjoy the latest revisions of these books. They should be the best versions yet.


Wednesday, December 28, 2016

New ESDS Book: Effective SQL

SQL finally gets the effective treatment. That's an accomplishment, because despite an official ISO standard for SQL, there's enough variation among common offerings that the authors of Effective SQL felt obliged to test their code (e.g., schemas, queries, etc.) on six different implementations. They also point out syntactic and semantic differences between "official" SQL and the SQL you're probably using. 

Pulling off that kind of feat calls for lots of experience, both with SQL and with explaining it to others. Authors John Viescas, Doug Steele, and Ben Clothier have it in spades. They're pushing a century of IT experience (!), and they've published more than a half-dozen books on databases, SQL, or both. It's hard to get better than that.

If you work with SQL, you owe it to yourself to take a look at Effective SQL.


Tuesday, December 27, 2016

New ESDS Book: Effective C#, Third Edition

The third incarnation of Bill Wagner's best-selling Effective C# has flown off the presses, and a copy has landed on my desk. Apparently it's flying off the shelves, too, because it's currently Amazon's #1 new release in the category of Microsoft C and C++ Windows Programming. If you'd like the book to land on your desk as well as mine, you might want to place your order quickly.

This revision of Effective C# is part one of a two-part comprehensive update Bill is undertaking for both his C# titles (the other being More Effective C#). For details on the motivation for the updates and his thinking about them, check out Bill's recent blog post.

Happy C#ing!


Effective Modern C++ in Portuguese!

The latest addition to the Effective Modern C++ family goes by C++ Moderno e Eficaz and targets readers of Portuguese. My understanding is that the book's been out for a few months, but my copy arrived only a few days ago.

Like most foreign translations of EMC++, this one uses just one ink color, so if you're comfortable with technical English, I recommend the four-color English (American) edition. However, if Portuguese descriptions of C++11 and C++14 features are your preferred cup of tea, this is the brew for you!


Sunday, November 27, 2016

The Great Foreign Edition Book Giveaway

One of the nicer author perks is seeing your books appear in translation. In my 2003 Advice to Prospective Book Authors, I wrote:
Few things evoke quite the level of giddiness as seeing a copy of your book in a foreign script. I, for one, cherished my books in Chinese, and I continued to cherish them even after I found out that they were actually in Korean.
My publishers generally send me at least one copy of each translation they authorize. I often receive several copies, however, and over the years, I've amassed more copies of my books in foreign languages than I have use for. Look!—these are the extra copies I currently have:

Instead of letting these books gather more dust, I've decided to give them away. Want one? Just ask. I'll autograph it for you and throw it in the mail, and all I'll request in return is that you cover the cost of postage.

I'll describe the details of how the giveaway works in a moment, but first let me show you the available inventory. Most books are in a language other than English, but what I'm technically giving away are foreign editions, so a few have the same text as the US book (i.e., they're in English). Such editions are generally printed on cheaper paper than their US counterparts, and like almost all the books I'm giving away, they use only one ink color, even if the US version uses multiple colors.

Here's what I've got:

Things to bear in mind:

  • For books with two ISBN lines, each line represents a distinct ISBN for the book. The upper one is the older ISBN-10. The lower one is the newer ISBN-13. (ISBN-10 vs ISBN-13 is the publishing equivalent of IPv4 vs. IPv6.)
  • Sometimes there are multiple versions of the same translation, e.g., there are two entries for German and for Japanese translations of Effective C++, Third Edition. In such cases, the only difference is typically the cover design. As far as I know, the substance of all translations of a particular book into a particular language is the same.
  • In the table, "Chinese" is ambiguous, because there are two versions of printed Chinese: traditional and simplified. To find out which Chinese is meant, use your favorite search engine to look up a book's ISBN.
  • I've tried to list accurate languages for the books, but, not being able to read most of them, I may have made a mistake here and there. If so, I apologize, and I hope you'll bring the errors to my attention.
  • The first two editions of Effective C++ are either old or really old. Both are out of date. They might be suitable for a C++ museum, or maybe you could employ them as research material for that Scott Meyers biography you've been working on (ahem), but the programming advice in these editions is not to be trusted. I'll send them to you if you ask me to, but before you make a request, think carefully about why you're doing it. It shouldn't be to improve your C++.

How the giveaway works:

  • If you'd like a book, send me email letting me know what you want and the address to which I should send it. If you'd like more than one book, that's fine, just list the books in priority order. (I'll ignore book requests posted as comments to this blog, sorry.)
  • I'll let the requests roll in for about two weeks (until about December 9), then I'll decide who gets what on whatever basis I want. My general plan is to assign higher priority to earlier requests and to issue everybody one book before issuing anybody more than one (i.e., to use a pseudo-FIFO pseudo-round-robin algorithm), but my plan might change. If your request includes an unusually good reason to satisfy it, I'll increase your priority. (An example of an unusually good reason would be that you'd like books to stock a library, thus making them available to many people.)
  • At some point (by December 16, I hope), I'll let you know whether I can satisfy your request. If I can, I'll put your book(s) in the mail, let you know how much the postage is, and request that you send me that much by Paypal. As it happens, I've gone down this road a couple of times in the past, and some of the promised payments never materialized. Nevertheless, my faith in the basic honesty of C++ software developers endures. I'd appreciate it if you wouldn't do anything to change that.
Soooo...who wants a book that I can't read, that's out of date, or both?


Monday, November 21, 2016

Help me sort out the meaning of "{}" as a constructor argument

In Effective Modern C++, one of the explanations I have in Item 7 ("Distinguish between () and {} when creating objects") is this:
If you want to call a std::initializer_list constructor with an empty std::initializer_list, you do it by making the empty braces a constructor argument—by putting the empty braces inside the parentheses or braces demarcating what you’re passing:
class Widget {
public:
  Widget();                                   // default ctor
  Widget(std::initializer_list<int> il);      // std::initializer_list ctor
  …                                           // no implicit conversion funcs
};

Widget w1;          // calls default ctor
Widget w2{};        // also calls default ctor
Widget w3();        // most vexing parse! declares a function!

Widget w4({});      // calls std::initializer_list ctor with empty list
Widget w5{{}};      // ditto
I recently got a bug report from Calum Laing saying that in his experience, the initializations of w4 and w5 aren't equivalent, because while w4 behaves as my comment indicates, the initialization of w5 takes place with a std::initializer_list with one element, not zero.

A little playing around showed that he was right, but further playing around showed that changing the example in small ways changed its behavior. In my pre-retirement-from-C++ days, that'd have been my cue to dive into the Standard to figure out what behavior was correct and, more importantly, why, but now that I'm supposed to be kicking back on tropical islands and downing piña coladas by the bucket (a scenario that would be more plausible if I lay around on beaches...or drank), I decided to stop my research at the point where things got complicated. "Use the force of the Internet!" I told myself. In that spirit, let me show you what I've got in the hope that you can tell me why I'm getting it. (Maybe it's obvious. I really haven't thought a lot about C++ since the end of last year.)

My experiments showed that one factor affecting whether "{{}}" as an argument list yields a zero-length std::initializer_list<T> was whether T had a default constructor, so I threw together some test code involving three classes, two of which could not be default-constructed. I then used both "({})" (note the outer parentheses) and "{{}}" as argument lists to a constructor taking a std::initializer_list for a template class imaginatively named X. When the constructor runs, it displays the number of elements in its std::initializer_list parameter.

Here's the code. The comments in main show the results I got under gcc, clang, and vc++; only one set of results is shown, because all three compilers produced the same output.
#include <iostream>
#include <initializer_list>

class DefCtor {
public:
  DefCtor(){}
};

class DeletedDefCtor {
public:
  DeletedDefCtor() = delete;
};

class NoDefCtor {
public:
  NoDefCtor(int){}
};

template<typename T>
class X {
public:
  X() { std::cout << "Def Ctor\n"; }

  X(std::initializer_list<T> il)
  {
    std::cout << "il.size() = " << il.size() << '\n';
  }
};

int main()
{
  X<DefCtor> a0({});           // il.size = 0
  X<DefCtor> b0{{}};           // il.size = 1

  X<DeletedDefCtor> a2({});    // il.size = 0
  X<DeletedDefCtor> b2{{}};    // il.size = 1

  X<NoDefCtor> a1({});         // il.size = 0
  X<NoDefCtor> b1{{}};         // il.size = 0
}
These results raise two questions:
  1. Why does the argument list syntax "{{}}" yield a one-element std::initializer_list for a type with a default constructor, but a zero-element std::initializer_list for a type with no default constructor?
  2. Why does a type with a deleted default constructor behave like a type with a default constructor instead of like a type with no default constructor?
If I change the example to declare DefCtor's constructor explicit, clang and vc++ produce code that yields a zero-length std::initializer_list, regardless of which argument list syntax is used:
class DefCtor {
public:
  explicit DefCtor(){}             // now explicit
};


X<DefCtor> a0({});           // il.size = 0
X<DefCtor> b0{{}};           // il.size = 0 (for clang and vc++)
However, gcc rejects the code:
source_file.cpp:35:19: error: converting to ‘DefCtor’ from initializer list would use explicit constructor ‘DefCtor::DefCtor()’
   X<DefCtor> b0{{}};
gcc's error message suggests that it may be trying to construct a DefCtor from an empty std::initializer_list in order to move-construct the resulting temporary into b0. If that's what it's trying to do, and if that's what compilers are supposed to do, the example would become more complicated, because it would mean that what I meant to be a series of single constructor calls may in fact include calls that create temporaries that are then used for move-constructions.

We thus have two new questions:
  1. Is the code valid if DefCtor's constructor is explicit?
  2. If so (i.e., if clang and vc++ are correct and gcc is incorrect), why does an explicit constructor behave differently from a non-explicit constructor in this example? The constructor we're dealing with doesn't take any arguments.
The natural next step would be to see what happens when we declare the constructors in DeletedDefCtor and/or NoDefCtor explicit, but my guess is that once we understand the answers to the four questions above, we'll know enough to be able to anticipate (and verify) what would happen. I hereby open the floor to explanations of what's happening such that we can answer the questions I've posed. Please post your explanations in the comments!

---------- UPDATE ----------

As several commenters pointed out, in my code above, DeletedDefCtor is an aggregate, which is not what I intended. Here's revised code that eliminates that. With this revised code, all three compilers yield the same behavior, which, as noted in the comment in main below, includes failing to compile the initialization for b2. (Incidentally, I apologize for the 0-2-1 ordering of the variable names. They were originally in a different order, but I moved them around to make the example clearer, then forgot to rename them, thus rendering the example probably more confusing, sigh.)
#include <iostream>
#include <initializer_list>

class DefCtor {
  int x;
};

class DeletedDefCtor {
  int x;
public:
  DeletedDefCtor() = delete;
};

class NoDefCtor {
  int x;
public:
  NoDefCtor(int){}
};

template<typename T>
class X {
public:
  X() { std::cout << "Def Ctor\n"; }

  X(std::initializer_list<T> il)
  {
    std::cout << "il.size() = " << il.size() << '\n';
  }
};

int main()
{
  X<DefCtor> a0({});              // il.size = 0
  X<DefCtor> b0{{}};              // il.size = 1
  X<DeletedDefCtor> a2({});       // il.size = 0
  // X<DeletedDefCtor> b2{{}};    // error! attempt to use deleted constructor
  X<NoDefCtor> a1({});            // il.size = 0
  X<NoDefCtor> b1{{}};            // il.size = 0
}
This revised code renders question 2 moot.

The revised code exhibits the same behavior as the original code when DefCtor's constructor is declared explicit: gcc rejects the initialization of b0, but clang and vc++ accept it and, when the code is run, il.size() produces 0 (instead of the 1 that's produced when the constructor is not explicit).

---------- RESOLUTION ----------

Francisco Lopes, the first person to post comments on this blog post, described exactly what was happening as regards questions 1 and 2 about the original code I posted. The only thing he didn't do was cite sections of the Standard, which I can hardly fault him for. From my perspective, the key provisions in the C++14 Standard are
  • [over.match.list], which says that when you have a braced initializer for an object, you first try to treat the entire initializer as an argument to a constructor taking a std::initializer_list. If that doesn't yield a valid call, you fall back on viewing the contents of the braced initializer as constructor arguments and perform overload resolution again.
  • 8.5.4/5 ([dcl.init.list]/5), which says that if you're initializing a std::initializer_list from a braced initializer, you copy-initialize each element of the std::initializer_list from the corresponding element of the braced initializer. The relevance of this part of the Standard was brought to my attention by Marco Alesiani in his comment below.
The behavior of the initializations of a0 and b0, then, can be explained as follows:
X<DefCtor> a0({});  // The arg list uses parens, not braces, so the only ctor argument is
                    // "{}", which, per [over.ics.list]/2, becomes an empty
                    // std::initializer_list. (Thanks to tcanens at reddit for the
                    // reference.)

X<DefCtor> b0{{}};  // The arg list uses braces, so the ctor argument is "{{}}", which is
                    // an initializer list with one element, "{}". DefCtor can be
                    // copy-initialized from "{}", so the ctor's std::initializer_list
                    // param contains a single default-constructed DefCtor object.
I thus understand the error in Effective Modern C++ that Calum Laing brought to my attention. The information in the comments (and in this reddit subthread) regarding how explicit constructors affect things is just a bonus.

Thanks to everybody for helping me understand what was going on. All I have to do now is figure out how to use this newfound understanding to fix the problem in the book...

Wednesday, November 9, 2016

Test Post -- Please Ignore

This is test content. Please ignore.

Monday, August 8, 2016

Interview with Me (in Korean)

My keynote address at NDC in Seoul got the Korean tech press interested in talking to me, and the interview Jihyun Lee conducted has now been published at Bloter.

As a rule, I read through my interviews before blogging about their existence, because, hey?!, who knows what I said? But since the interview is published in Korean, I skipped that step. If you read Korean as easily as you read C++, I hope you enjoy the interview. If you enjoy it enough to translate it into English (or if you find a translation floating around the Internet somewhere), please let me know.


Tuesday, June 21, 2016

Effective Modern C++ in Traditional Chinese!

Yesterday I received an interesting-looking box in the mail. The contents were even more interesting: the translation of Effective Modern C++ into Traditional Chinese!

This translation uses only one ink color (black), so if you're comfortable with technical English, you're probably better off with the English (American) edition.  If you prefer your C++ with a traditional Chinese flair, however, this new edition is the one for you.

EMC++ has now been translated into the following languages:
  • German
  • Italian
  • Polish
  • Japanese
  • Korean
  • Russian
  • French
  • Traditional Chinese
My understanding is that translations into Portuguese and Simplified Chinese are also in the works. If you're aware of other translations, please let me know.

In the meantime, enjoy the new Chinese translation of EMC++.


Monday, April 25, 2016

Thursday's NDC Presentation will be live, but remote

Recent developments have conspired to prevent me from attending this week's Nexon Developers Conference in Seoul, but I'll still be making my keynote presentation, "Modern C++ Beyond the Headlines." The talk will be live, but I'll be at home instead of in the conference hall. The heavy lifting on the communications front will be handled by Skype.

The keynote will take place at 5:05PM local time at the conference, which will be 1:05AM local time for me. It should be interesting to see who suffers more: the conference attendees at the end of a long day or me at the end of a longer one :-)


Monday, April 4, 2016

Presentation at Nexon Developers Conference in Seoul on April 28

In my "retirement from active involvement in C++" post at the end of last year, I wrote:
I may even give one more talk. (A potential conference appearance has been in the works for a while. If it gets scheduled, I'll let you know.)
Well, it's been scheduled, and I'm letting you know: I'll be giving a presentation at the Nexon Developers Conference in Seoul on April 28. The topic is "Modern C++ Beyond the Headlines," and I plan to talk about how some features in C++11/14 are better than they appear at first glance (e.g., constexpr), while others are likely to be less attractive than they initially seem (e.g., emplacement).

There are no talks in the pipeline after this one, and I've been holding fast on my decision not to accept new engagements, so in all likelihood, this is the last C++ presentation I'll make. If you want to be there to see if I botch the landing, the Nexon Developers Conference at the end of the month is the place to be!


Monday, March 28, 2016

Effective Modern C++ in French!

Et Voilà! The French edition of Effective Modern C++ has just arrived at my desk, so it should be available for you, too.

This version of the book uses only one ink color (black), so if you're comfortable with technical English, I suspect you'll prefer the four-color English (American) edition. But if you like your C++ in French (including the code comments!), this new edition is your ami.


Thursday, December 31, 2015

} // good to go

Okay, let's see what we've got. Two sets of annotated training materials. Six books. Over four dozen online videos. Some 80 articles, interviews, and academic papers. A slew of blog entries, and more posts to Usenet and StackOverflow than you can shake a stick at. A couple of contributions to the C++ vernacular. A poll equating my hair with that of a cartoon character.

I think that's enough; we're good to go. So consider me gone. 25 years after publication of my first academic papers involving C++, I'm retiring from active involvement with the language.

It's a good time for it. My job is explaining C++ and how to use it, but the C++ explanation biz is bustling. The conference scene is richer and more accessible than ever before, user group meetings take place worldwide, the C++ blogosphere grows increasingly populous, technical videos cover everything from atomics to zero initialization, audio podcasts turn commute-time into learn-time, and livecoding makes it possible to approach C++ as a spectator sport. StackOverflow provides quick, detailed answers to programming questions, and the C++ Core Guidelines aim to codify best practices. My voice is dropping out, but a great chorus will continue.

Anyway, I'm only mostly retiring from C++. I'll continue to address errata in my books, and I'll remain consulting editor for the Effective Software Development Series. I may even give one more talk. (A potential conference appearance has been in the works for a while. If it gets scheduled, I'll let you know.)

"What's next?," you may wonder. I get that a lot. I've spent the last quarter century focusing almost exclusively on C++, and that's caused me to push a lot of other things to the sidelines. Those things now get a chance to get off the bench. 25 years of deferred activities begets a pretty long to-do list. The topmost entry? Stop trying to monitor everything in the world of C++ :-)


Friday, December 4, 2015

Effective Modern C++ in Russian!

I haven't yet received a copy, but I have received word that there's now a Russian translation of Effective Modern C++. For details, please consult this page.

C++ in Cyrillic! What could be finer?


Tuesday, November 17, 2015

The Brick Wall of C++ Source Code Transformation

In 1992, I was responsible for organizing the Advanced Topics Workshop that accompanied the USENIX C++ Technical Conference. The call for workshop participation said:
The focus of this year's workshop will be support for C++ software development tools. Many people are beginning to experiment with the idea of having such tools work off a data structure that represents parsed C++, leaving the parsing task to a single specialized tool that generates the data structure. 
As the workshop approached, I envisioned great progress in source code analysis and transformation tools for C++. Better lints, deep architectural analysis tools, automatic code improvement utilities--all these things would soon be reality! I was very excited.

By the end of the day, my mood was different. Regardless of how we approached the problem of automated code comprehension, we ran into the same problem: the preprocessor. For tools to understand the semantics of source code, they had to examine the code after preprocessing, but to produce acceptable transformed source code, they had to modify what programmers work on: files with macros unexpanded and preprocessor directives intact. That meant tools had to map from preprocessed source files back to unpreprocessed source files. That's challenging even at first glance, but when you look closer, the problem gets harder. I found out that some systems #include a header file, modify preprocessor symbols it uses, then #include the header again--possibly multiple times. Imagine back-mapping from preprocessed source files to unpreprocessed source files in such systems!
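To see why such multiple-inclusion schemes are a nightmare for back-mapping, consider a single-file sketch of the technique (the names here are mine; in real systems the macro list lives in its own header, which is #included repeatedly):

#include <iostream>

// Stand-in for a header that gets included more than once with different
// definitions of COLOR (often called the "X macro" pattern).
#define COLOR_LIST COLOR(Red) COLOR(Green) COLOR(Blue)

#define COLOR(name) name,
enum Color { COLOR_LIST NumColors };        // first expansion: enumerators
#undef COLOR

#define COLOR(name) #name,
const char* colorNames[] = { COLOR_LIST };  // second expansion: string names
#undef COLOR

int main()
{
  // The same macro text produced both the enum and the array, so a tool that
  // sees only preprocessed code has no single "original" line to map back to.
  for (int i = 0; i < NumColors; ++i)
    std::cout << colorNames[i] << '\n';
}

The same source line expands to two unrelated constructs, so any transformation applied to the expanded code has no unique home in the unexpanded code.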

Dealing with real C++ source code means dealing with real uses of the preprocessor, and at that workshop nearly a quarter century ago, I learned that real uses of the preprocessor doomed most tools before they got off the drawing board. It was a sobering experience.

In the ensuing 23 years, little has changed. Tools that transform C++ source code still have to deal with the realities of the preprocessor, and that's still difficult. In my last blog post, I proposed that the C++ Standardization Committee take into account how source-to-source transformation tools could reduce the cost of migrating old code to new standards, thus permitting the Committee to be more aggressive about adopting breaking changes to the language. In this post, I simply want to acknowledge that preprocessor macros make the development of such tools harder than my last post implied.

Consider this very simple C++:
#define ZERO 0

auto x = ZERO;
int *p = ZERO;
In the initialization of x, ZERO means the int 0. In the initialization of p, ZERO means the null pointer. What should a source code transformation tool do with this code if its job is to replace all uses of 0 as the null pointer with nullptr? It can't change the definition of ZERO to nullptr, because that would change the semantics of the initialization of x. It could, I suppose, get rid of the macro ZERO and replace all uses with either the int 0 or nullptr, depending on context, but (1) that's really outside its purview (programmers should be the ones to determine if macros should be part of the source code, not tools whose job it is to nullptr-ify a code base), and (2) ZERO could be used inside other macros that are used inside other macros that are used inside other macros..., and especially in such cases, reducing the macro nesting could fill the transformed source code with redundancies and make it harder to maintain. (It'd be the moral equivalent of replacing all calls to inline functions with the bodies of those functions.)
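To make the dilemma concrete: if a tool did expand ZERO away, the context-dependent rewrite it would have to emit looks like this (a sketch, not the output of any actual tool):

int main()
{
  // Originally both initializers were ZERO (i.e., the macro expanding to 0):
  auto x = 0;          // the int use of ZERO must remain 0
  int *p = nullptr;    // the null-pointer use of ZERO becomes nullptr

  (void)x;             // silence unused-variable warnings
  (void)p;
}

Two textually identical uses of the macro get two different replacements, and the macro itself disappears from the code base.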

I don't recall a lot of talk about templates at the workshop in 1992. At that time, few people had experience with them. (The first compiler to support them, cfront 3.0, was released in 1991.) Nevertheless, templates can give rise to the same kinds of problems as the preprocessor:
template<typename T>
void setToZero(T& obj) { obj = 0; }

int x;
setToZero(x);    // "0" in setToZero means the int

int *p;
setToZero(p);    // "0" in setToZero means the null pointer
I was curious about what clang-tidy did in these situations (one of its checks is modernize-use-nullptr), but I was unable to find a way to enable that check in the version of clang-tidy I downloaded (LLVM version 3.7.0svn-r234109). Not that it matters. The way that clang-tidy approaches the problem isn't the only way, and one of the reasons I propose a decade-long time frame to go from putting a language feature on a hit list to actually getting rid of it is that it's likely to take significant time to develop source-to-source translation tools that can handle production C++ code, macros and templates and all.

The fact that the problem is hard doesn't mean it's insurmountable. The existence of refactoring tools like clang-tidy (far from the only example of such tools) demonstrates that industrial-strength C++ source transformation tools can be developed. It's nonetheless worth noting that such tools have to take the existence of templates and the preprocessor into account, and those are noteworthy complicating factors.

-- UPDATE --

A number of comments on this post include references to tools that chip away at the problems I describe here. I encourage you to pursue those references. As I said, the problem is hard, not insurmountable.

Friday, November 13, 2015

Breaking all the Eggs in C++

If you want to make an omelet, so the saying goes, you have to break a few eggs. Think of the omelet you could make if you broke not just a few eggs, but all of them! Then think of what it'd be like to not just break them, but to replace them with newer, better eggs. That's what this post is about: breaking all the eggs in C++, yet ending up with better eggs than you started with.

NULL, 0, and nullptr

NULL came from C. It interfered with type-safety (it depends on an implicit conversion from void* to typed pointers), so C++ introduced 0 as a better way to express null pointers. That led to problems of its own, because 0 isn't a pointer, it's an int. C++11 introduced nullptr, which embodies the idea of a null pointer better than NULL or 0. Yet NULL and 0-as-a-null-pointer remain valid. Why? If nullptr is better than both of them, why keep the inferior ways around?

Backward-compatibility, that's why. Eliminating NULL and 0-as-a-null-pointer would break existing programs. In fact, it would probably break every egg in C++'s basket. Nevertheless, I'm suggesting we get rid of NULL and 0-as-a-null-pointer, thus eliminating the confusion and redundancy inherent in having three ways to say the same thing (two of which we discourage people from using).
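The trouble with 0-as-a-null-pointer shows up concretely in overload resolution. Here's a minimal illustration (the overloads are hypothetical, chosen to expose the issue):

#include <iostream>

int f(int)   { return 1; }
int f(void*) { return 2; }

int main()
{
  std::cout << f(0) << '\n';        // prints 1: 0 is an int, even when meant as null
  std::cout << f(nullptr) << '\n';  // prints 2: nullptr converts only to pointer types
  // f(NULL);                       // f(int) or ambiguous, depending on NULL's definition
}

nullptr always means "null pointer"; 0 and NULL only sometimes do, and that's the confusion worth eliminating.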

But read on.

Uninitialized Memory

If I declare a variable of a built-in type and I don't provide an initializer, the variable is sometimes automatically set to zero (null for pointers). The rules for when "zero initialization" takes place are well defined, but they're a pain to remember. Why not just zero-initialize all built-in types that aren't explicitly initialized, thus eliminating not only the pain of remembering the rules, but also the suffering associated with debugging problems stemming from uninitialized variables?
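As a reminder of how fiddly the current rules are, here's a sketch (the variable names are arbitrary):

#include <iostream>

int global;               // static storage duration: zero-initialized

int main()
{
  static int s;           // static local: also zero-initialized
  int zeroed{};           // value-initialized: guaranteed to be 0
  int local;              // automatic, no initializer: indeterminate value;
                          // reading it would be undefined behavior
  (void)local;
  std::cout << global << ' ' << s << ' ' << zeroed << '\n';   // prints "0 0 0"
}

Whether a built-in gets zeroed depends on its storage duration and on the initializer syntax, which is exactly the kind of rule people have to look up.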

Because it can lead to unnecessary work at runtime. There's no reason to set a variable to zero if, for example, the first thing you do is pass it to a routine that assigns it a value.

So let's take a page out of D's book (in particular, page 30 of The D Programming Language) and zero-initialize built-ins by default, but specify that void as an initial value prevents initialization:
int x;              // always zero-initialized
int x = void;       // never zero-initialized
The only effect such a language extension would have on existing code would be to change the initial value of some variables from indeterminate (in cases where they currently would not be zero-initialized) to specified (they would be zero-initialized). That doesn't lead to any backward-compatibility problems in the traditional sense, but I can assure you that some people will still object. Default zero initialization could lead to a few more instructions being executed at runtime (even taking into account compilers' ability to optimize away dead stores), and who wants to tell developers of a finely-tuned safety-critical realtime embedded system (e.g., a pacemaker) that their code might now execute some instructions they didn't plan on?

I do. Break those eggs!

This does not make me a crazy man. Keep reading.

std::list::remove and std::forward_list::remove

Ten standard containers offer a member function that eliminates all elements with a specified value (or, for map containers, a specified key): list, forward_list, set, multiset, map, multimap, unordered_set, unordered_multiset, unordered_map, unordered_multimap. In eight of these ten containers, the member function is named erase. In list and forward_list, it's named remove. This is inconsistent in two ways. First, different containers use different member function names to accomplish the same thing. Second, the meaning of "remove" as an algorithm is different from that as a container member function: the remove algorithm can't eliminate any container elements, but the remove member functions can.
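The difference between the remove member functions and the remove algorithm is easy to see side by side; here's a minimal sketch:

#include <algorithm>
#include <cassert>
#include <list>
#include <vector>

int main()
{
  std::list<int> lst{1, 2, 3, 2};
  lst.remove(2);                    // member function: really eliminates elements
  assert(lst.size() == 2);

  std::vector<int> vec{1, 2, 3, 2};
  auto newEnd = std::remove(vec.begin(), vec.end(), 2);  // algorithm: just shuffles
  assert(vec.size() == 4);          // nothing eliminated yet...
  vec.erase(newEnd, vec.end());     // ...until erase finishes the erase-remove idiom
  assert(vec.size() == 2);
}

Same name, two meanings: list::remove shrinks the container, while std::remove merely repartitions it and leaves the erasing to erase.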

Why do we put up with this inconsistency? Because getting rid of it would break code. Adding a new erase member function to list and forward_list would be easy enough, and it would eliminate the first form of inconsistency, but getting rid of the remove member functions would render code calling them invalid. I say scramble those eggs!

Hold your fire. I'm not done yet.


override

C++11's override specifier enables derived classes to make explicit which functions are meant to override virtual functions inherited from base classes. Using override makes it possible for compilers to diagnose a host of overriding-related errors, and it makes derived classes easier for programmers to understand. I cover this in my trademark scintillating fashion (ahem) in Item 12 of Effective Modern C++, but in a blog post such as this, it seems tacky to refer to something not available online for free, and that Item isn't available for free--at least not legally. So kindly allow me to refer you to this article as well as this StackOverflow entry for details on how using override improves your code.
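Still, a thumbnail sketch can't hurt (the class names here are mine, chosen for illustration):

#include <iostream>

class Base {
public:
  virtual ~Base() = default;
  virtual const char* name() const { return "Base"; }
};

class Derived : public Base {
public:
  const char* name() const override { return "Derived"; }  // compiler-verified override
  // const char* name() override { ... }                   // wouldn't compile: the
  //   non-const version overrides nothing, and override makes the compiler say so
};

int main()
{
  Derived d;
  Base& b = d;
  std::cout << b.name() << '\n';   // prints "Derived"
}

Without override, the mismatched non-const version would silently become a new function instead of an override, and no diagnostic would be issued.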

Given the plusses that override brings to C++, why do we allow overriding functions to be declared without it? Making it possible for compilers to check for overriding errors is nice, but why not require that they do it? It's not like we make type checking optional, n'est-ce pas?

You know where this is going. Requiring that overriding functions be declared override would cause umpty-gazillion lines of legacy C++ to stop compiling, even though all that code is perfectly correct. If it ain't broke, don't fix it, right? Wrong!, say I. Those old functions may work fine, but they aren't as clear to class maintainers as they could be, and they'll cause inconsistency in code bases as newer classes embrace the override lifestyle. I advocate cracking those eggs wide open.

Backward Compatibility 

Don't get me wrong. I'm on board with the importance of backward compatibility. Producing software that works is difficult and expensive, and changing it is time-consuming and error-prone. It can also be dangerous. There's a reason I mentioned pacemakers above: I've worked with companies who use C++ as part of pacemaker systems. Errors in that kind of code can kill people. If the Standardization Committee is going to make decisions that outlaw currently valid code (and that's what I'd like to see it do), it has to have a very good reason.

Or maybe not. Maybe a reason that's merely decent suffices as long as existing code can be brought into conformance with a revised C++ specification in a way that's automatic, fast, cheap, and reliable. If I have a magic wand that allows me to instantly and flawlessly take all code that uses NULL and 0 to specify null pointers and revises the code to use nullptr instead, where's the downside to getting rid of NULL and 0-as-a-null-pointer and revising C++ such that the only way to specify a null pointer is nullptr? Legacy code is easily updated (the magic wand works instantly and flawlessly), and we don't have to explain to new users why there are three ways to say the same thing, but they shouldn't use two of them. Similarly, why allow overriding functions without override if the magic wand can instantly and flawlessly add override to existing code that lacks it?

The eggs in C++ that I want to break are the old ways of doing things--the ones the community now acknowledges should be avoided. NULL and 0-as-a-null-pointer are eggs that should be broken. So should variables with implicit indeterminate values. list::remove and forward_list::remove need to go, as do overriding functions lacking override. The newer, better eggs are nullptr, variables with indeterminate values only when expressly requested, list::erase and forward_list::erase, and override. 

All we need is a magic wand that works instantly and flawlessly.

In general, that's a tall order, but I'm willing to settle for a wand with limited abilities. The flawless part is not up for negotiation. If the wand could break valid code, people could die. Under such conditions, it'd be irresponsible of the Standardization Committee to consider changing C++ without the above-mentioned very good reason. I want a wand that's so reliable, the Committee could responsibly consider changing the language for reasons that are merely decent.

I'm willing to give ground on instantaneousness. The flawless wand must certainly run quickly enough to be practical for industrial-sized code bases (hundreds of millions of lines or more), but as long as it's practical for such code bases, I'm a happy guy. When it comes to speed, faster is better, but for the speed of the magic wand, good enough is good enough.

The big concession I'm willing to make regards the wand's expressive power. It need not perform arbitrary changes to C++ code bases. For Wand 1.0, I'm willing to settle for the ability to make localized source code modifications that are easy to algorithmically specify. All the examples I discussed above satisfy this constraint:
  • The wand should replace all uses of NULL and of 0 as a null pointer with nullptr. (This alone won't make it possible to remove NULL from C++, because experience has shown that some code bases exhibit "creative" uses of NULL, e.g., "char c = (char) NULL;". Such code typically depends on undefined behavior, so it's hard to feel too sympathetic towards it, but that doesn't mean it doesn't exist.)
  • The wand should replace all variable definitions that lack explicit initializers and that are currently not zero-initialized with an explicit initializer of zero. 
  • The wand should replace uses of list::remove and forward_list::remove with uses of list::erase and forward_list::erase. (Updating the container classes to support the new erase member functions would be done by humans, i.e., by STL implementers. That's not the wand's responsibility.)
  • The wand should add override to all overriding functions.
Each of the transformations above is semantics-preserving: the revised code would have exactly the same behavior under C++ with the revisions I've suggested as it currently does under C++11 and C++14.


The magic wand exists--or at least the tool needed to make it does. It's called Clang. All hail Clang! Clang parses and performs semantic analysis on C++ source code, thus making it possible to write tools that modify C++ programs. Two of the transformations I discussed above appear to be part of clang-tidy (the successor to clang-modernize): replacing NULL and 0 as null pointers with nullptr and adding override to overriding functions. That makes clang-tidy, if nothing else, a proof of concept. That has enormous consequences.
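For the curious, the two rewrites correspond to real clang-tidy checks: modernize-use-nullptr and modernize-use-override. A sketch of an invocation (the file name and compile flags are placeholders for your own sources):

```shell
# Apply only the two modernize checks and rewrite the file in place.
# widget.cpp and -std=c++14 stand in for your own sources and flags.
clang-tidy -checks='-*,modernize-use-nullptr,modernize-use-override' \
           -fix widget.cpp -- -std=c++14
```

The -fix flag is what turns the checker into a wand: instead of merely reporting the old constructs, clang-tidy replaces them in the source.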

Revisiting Backward Compatibility 

In recent years, the Standardization Committee's approach to backward compatibility has been to preserve it at all costs unless (1) it could be demonstrated that only very little code would be broken and (2) the cost of the break was vastly overcompensated for by a feature enabled by the break. Hence the Committee's willingness to eliminate auto's traditional meaning in C and C++98 (thus making it possible to give it new meaning in C++11) and its C++11 adoption of the new keywords alignas, alignof, char16_t, char32_t, constexpr, decltype, noexcept, nullptr, static_assert, and thread_local.

Contrast this with the perpetual deprecation of setting bool variables to true by applying ++ to them. When C++14 was adopted, that construct had been deprecated for some 17 years, yet it remains part of C++. Given its lengthy stint on death row, it's hard to imagine that a lot of code still depends on it, but my guess is that the Committee sees nothing to be gained by actually getting rid of the "feature," so, failing part (2) of the break-backward-compatibility test, they leave it in.

Incidentally, code using ++ to set a bool to true is another example of the kind of thing that a tool like clang-tidy should be able to easily perform. (Just replace the use of ++ with an assignment from true.)

Clang makes it possible for the Standardization Committee to retain its understandable reluctance to break existing code without being quite so conservative about how they do it. Currently, the way to avoid breaking legacy software is to ensure that language revisions don't affect it. The sole tool in the backward-compatibility toolbox is stasis: change nothing that could affect old code. It's a tool that works, and make no mistake about it, that's important. The fact that old C++ code continues to be valid in modern C++ is a feature of great importance to many users. It's not just the pacemaker programmers who care about it.

Clang's contribution is to give the Committee another way to ensure backward compatibility: by recognizing that tools can be written to automatically modify old code to conform to revised language specifications without any change in semantics. Such tools, provided they can be shown to operate flawlessly (i.e., they never produce transformed programs that behave any differently from the code they're applied to) and at acceptable speed for industrial-sized code bases, give the Standardization Committee more room to get rid of the parts of C++ where there's consensus that we'd rather not have them in the language.

A Ten-Year Process

Here's how I envision this working:
  • Stage 1a: The Standardization Committee identifies features of the language and/or standard library that they'd like to get rid of and whose use they believe can be algorithmically transformed into valid and semantically equivalent code in the current version or a soon-to-be-adopted version of C++. They publish a list of these features somewhere. The Standard is probably not the place for this list. Perhaps a technical report would be a suitable avenue for this kind of thing. 
  • Stage 1b: Time passes, during which the community has the opportunity to develop tools like clang-tidy for the features identified in Stage 1a and to get experience with them on nontrivial code bases. As is the case with compilers and libraries, the community is responsible for implementing the tools, not the Committee.
  • Stage 2a: The Committee looks at the results of Stage 1b and reevaluates the desirability and feasibility of eliminating the features in question. For the features where they like what they see, they deprecate them in the next Standard.
  • Stage 2b: More time passes. The community gets more experience with the source code transformation tools needed to automatically convert bad eggs (old constructs) to good ones (the semantically equivalent new ones).
  • Stage 3: The Committee looks at the results of Stage 2b and again evaluates the desirability and feasibility of eliminating the features they deprecated in Stage 2a. Ideally, one of the things they find is that virtually all code that used to employ the old constructs has already been converted to use the new ones. If they deem it appropriate, they remove the deprecated features from C++. If they don't, they either keep them in a deprecated state (executing the moral equivalent of a goto to Stage 2b) or they eliminate their deprecated status. 
I figure that the process of getting rid of a feature will take about 10 years, where each stage takes about three years. That's based on the assumption that the Committee will continue releasing a new Standard about every three years.

Ten years may seem like a long time, but I'm not trying to optimize for speed. I'm simply trying to expand the leeway the Standardization Committee has in how they approach backward compatibility. Such compatibility has been an important factor in C++'s success, and it will continue to be so.

One Little Problem

The notion of algorithmically replacing one C++ construct with a different, but semantically equivalent, construct seems relatively straightforward, but that's only because I haven't considered the biggest, baddest, ruins-everythingest aspect of the C++-verse: macros. That's a subject for a post of its own, and I'll devote one to it in the coming days. [The post now exists here.] For now, I'm interested in your thoughts on the ideas above.

What do you think?

Saturday, October 31, 2015

Effective Modern C++ in Korean!

The latest translation to reach my door is another two-color version, this time in Korean. Knowing no Korean, I can't assess the quality of the translation, but I can say that during the translation process, the Korean publisher found an error in the index. That's a rare event—one that indicates that the translator and publisher were paying very close attention. I take that as a good sign.

I hope you enjoy EMC++ in Korean.


PS - O'Reilly and I fixed the indexing error in the latest release of the English edition of the book, so it's not just Korean readers who will benefit from the book's newest translation.