
Jan 22, 2019

The Ultimate Comparative Screwjob Calculator for translation rates

Some years ago I put out a number of little spreadsheet tools to help independent translators and some friends with small agencies to sort out the new concepts of "discount" created by the poisonous and unethical marketing tactics of Trados GmbH in the 1990s and adopted by many others since then. One of these was the Target Price Defense Tool (which I also released in German).

The basic idea behind that spreadsheet was to work out the rate to charge on what looked to be a one-off job with a new client who came out of nowhere proposing some silly scale of rate reductions based on (often bogus and unusable) matches. If your usual rate was USD 0.28 per word, for example, and that's what you wanted to make after all the "discounts" were applied, you could plug in the figures from the match analysis and determine that the rate to quote should be something like USD 0.35.
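For those who want to sanity-check the arithmetic without the spreadsheet, the calculation is simple enough to sketch in a few lines of Python. The grid percentages and word counts below are hypothetical; plug in whatever nonsense your prospect sends you:

# Hypothetical discount grid: match band -> fraction of the full rate paid
grid = {"repetitions": 0.30, "100%": 0.30, "95-99%": 0.60,
        "85-94%": 0.80, "75-84%": 1.00, "no match": 1.00}

# Hypothetical match analysis: match band -> word count
analysis = {"repetitions": 1200, "100%": 800, "95-99%": 1500,
            "85-94%": 2000, "75-84%": 1000, "no match": 3500}

total_words = sum(analysis.values())
# Words you are actually paid for once the "discounts" are applied
weighted_words = sum(analysis[band] * grid[band] for band in analysis)

desired_rate = 0.28  # USD per word, what you actually want to earn
# The rate to quote so the discounted job still pays your desired rate
defense_rate = desired_rate * total_words / weighted_words
print(f"Quote {defense_rate:.2f} USD/word to net {desired_rate:.2f} USD/word")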

Click on the graphic to view and download the Excel spreadsheet
Fast forward 11 years. Most of the sensible small agencies run by translators who understand the qualities needed for good text translation are gone, their owners retired, dead or hiding somewhere after their businesses were bought up and/or destroyed by unscrupulous and largely incompetent bulk market bog "leaders" with their Walmart-like tactics. Good at sales to C-level folk, with perhaps a few entertaining "inducements" on the side, but good at delivering the promised value? Not so much, in the cases I hear about. And many of the good translators who haven't simply walked away from the bullshit have agreed to some sort of rate scale based on matching, despite the fact that there is no standard whatsoever for how different tools calculate these "matches", and despite various kinds of new and nonsensical "stealth" matches now being sneaked in with little or no discussion.

So now the question is not so much whether a translator will deal with a given rate scale for a one-off job, but more often what the response should be to a new and usually more abusive rate scale proposed by some cost- and throat-cutting bogster determined to shave off every cent that an independent translator can be intimidated into yielding, thus destroying whatever remaining incentive there might be to go the extra mile in solving the inevitable unexpected problems one finds in many a text to translate.

And this, in fact, was the question I woke up to this morning. I told the friend who asked to go look for my ancient Target Price Defense Tool, but I was told that it wasn't helpful for the case at hand. (It actually was, but because of the different perspective that wasn't immediately obvious.)

Click on the graphic to view and download the Excel spreadsheet
So I built a new calculation tool quickly before breakfast which did the same calculations, but in a slightly different layout and with a somewhat different perspective: the Comparative Screwjob Calculator (screenshot above), because really, the point of these match scales is to screw somebody.

Shortly after that, I was asked to include the calculations of "internal matches" from SDL Trados (referred to as "homogeneity" in the memoQ world: material that is not in the translation memory, but where portions of text in the document or collection of documents have some similarity based on their character strings - NOT their linguistic sense). And of course there are other creatively imagined matches in some calculation grids: matches for subsegments in larger sentences (expect to get screwed if an author writes "for example" a lot), or matches based on some sort of loser's machine pseudo-translation algorithm that some monolingual algorithm developer has decided, without evidence, might save the translator a little effort - cut that rate to the bone! So I expanded the spreadsheet to allow for additional nonsense match rate types ("internal/other") and to compare a third grid, which can be used, for example, to develop a counterproposal if you are currently billing based on an agreed rate scale and a new one is proposed (all the time keeping in view how much you are losing versus the full rate, which might very well be getting charged to the end customer anyway).
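The comparison logic is no deeper than this little Python sketch (grids and word counts hypothetical again): compute what each grid actually pays on a given analysis and how far that falls below the undiscounted rate:

# Two hypothetical discount grids: match band -> fraction of the full rate paid
old_grid = {"reps/100%": 0.30, "fuzzy": 0.70, "internal/other": 1.00, "no match": 1.00}
new_grid = {"reps/100%": 0.10, "fuzzy": 0.50, "internal/other": 0.60, "no match": 1.00}

analysis = {"reps/100%": 2000, "fuzzy": 3000, "internal/other": 1500, "no match": 3500}
full_rate = 0.28  # per word, with no grid applied at all

def revenue(grid):
    return sum(words * grid[band] * full_rate for band, words in analysis.items())

undiscounted = full_rate * sum(analysis.values())
for name, grid in (("agreed", old_grid), ("proposed", new_grid)):
    r = revenue(grid)
    print(f"{name} grid pays {r:.2f}, "
          f"{100 * (undiscounted - r) / undiscounted:.0f}% below the full rate")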

Click on the graphic to view and download the Excel spreadsheet
The result was the Ultimate Comparative Screwjob Calculator (screenshot above). Now that's probably too optimistic a name for it, because surely those who think only of translators as providers of bulk material to be ground up for linguistic sausage have other ways to take their kilos of flesh for the delivery mix.

If this all sounds a bit ludicrous, that's because it is. I am a big fan of well-managed processes myself; I began my career as a research chemist with a knowledge of multivariate statistical optimization of industrial processes and used this knowledge to save - and make - countless millions for my employers or client companies and save hundreds of jobs for ordinary people. I get it that cost can be a variable in the equation, because starting some 34 years ago, I began plugging it into my equations along with resin mix components and whatnot.

But the objective I never lost sight of was to deliver real value. And that included minimizing defects (applying the Taguchi method or some other modeling technique or just bloody common sense). And ensuring that expectations are met, with all stakeholders (don't you hate that word? it reminds me of a Dracula movie in my dreams where I hold the bit of holly wood in my hand as we open the coffin of thebigword's CEO) protected. That is something too few slick salesfolk in the bulk market bog understand. They talk a lot of nonsense about quality (Vashinatto: "doesn't matter"; Bog Diddley: "no complaints from my clients who don't understand the target language", etc.). But they are unwilling to admit the unsustainable nature of their business models and the abusive toll it takes on so many linguistic service providers.

So use these spreadsheets I made - one and all - if you like. But think about the processes with which you are involved and the rates you need to provide the kind of service you can put your name to. The kind where you won't have to say desperately and mendaciously "It wasn't me!" because economic and time pressures meant that you were unable to deliver your best work. That goes as much for respectable translation companies (there are some left) as for independent service professionals who want to commit to helping all their clientele receive what they need and deserve for the long run.


Dec 5, 2018

Bilingual Excel to LiveDocs corpora: video

A bit over three years ago, I published a blog post describing a simple way to move EUR-Lex data into a memoQ LiveDocs corpus so that the content can be used for matching and concordance work in translation and editing. The particular advantage of a LiveDocs corpus versus a translation memory is that the latter does not allow users to read the document context for concordance hits.

A key step in the import process is to bring the bilingual content in an Excel file into a memoQ project as a "translation document" and then send that content to LiveDocs from the translation files list. Direct import of bilingual content to a LiveDocs corpus using the multilingual delimited text filter is still not possible, despite years of asking memoQ's development team to implement it.

This is irritating, though in the case of a EUR-Lex alignment which may be slightly out of sync and need fixing, it is perhaps all for the best. And in some other situations, where the content may be incomplete and require filtering (in a View) before sending it to the corpus, it also makes sense to bring the file in as a translation document first to use the many tools available for selecting and modifying content. However, in many situations, it's simply a nuisance that the files cannot be sent directly to a LiveDocs corpus.

In any case, I've now done a short (silent) video to make the steps involved in this import process a little clearer:


Sep 15, 2015

A quick trip to LiveDocs for EUR-Lex bilingual texts

Quite a number of friends and respected colleagues use EUR-Lex as a reference source for EU legislation. Being generally sensible people, some of them have backed away from the overfull slopbucket of bulk DGT data and built more selective corpora of the legislation which they actually need for their work.

However, the issue of how to get the data into a usable form with a minimum of effort has caused no little trouble at times. The various texts can be copied out or downloaded in the languages of interest and aligned, but depending on the quality of the alignment tool, the results are often unsatisfactory. I've been told that AlignFactory does a better job than most, but then the question of how best to deal with the HTML bitexts from AlignFactory remains.

memoQ LiveDocs is of course rather helpful for quick and sometimes dirty alignment, but if the synchronization of the texts is too many segments off, it is sometimes difficult to find the information one needs even when the (bilingual) document is opened from the context menu in a concordance window.

EUR-Lex offers bi- or tri-lingual views of most documents in a web page. The alignments are often imperfect, but the synchronization is usually off by only one or two segments, so finding the right text in a document's context is not terribly difficult. So these often imperfect alignments are usually quite adequate for use as references in a memoQ LiveDocs corpus. Here is a procedure one might follow to get the EUR-Lex data there.


The bilingual text of a view such as the one above can be selected by dragging the cursor to select the first part of the information, then scrolling to the bottom of the window and Shift+clicking to select all the text in both columns:


Copy this text, then paste it into Excel:


Then import the Excel file as a file for "translation" in a memoQ project with the right language settings. Because of quirks with data access in LiveDocs when target language variants are specified but do not match, I have created a "data conversion project" with generic language settings (DE + EN in my case, as opposed to my usual DE-DE + EN-US project settings) to ensure that data stored in LiveDocs will be accessed without trouble from any project. (This irritating issue of language variants in LiveDocs was introduced a few versions ago by Kilgray in an attempt to placate some large agencies, but it has caused enormous headaches for professional translators who work with multiple sublanguage settings. We hope that urgent attention will be given to this problem soon; until then, keep your LiveDocs language data settings generic to ensure trouble-free data access!)


When the Excel file is added to the Translations file list, there are two important changes to make in the import options. First, the filter must be changed from Microsoft Excel to "multilingual delimited text" (which also handles multilingual Excel files!). Second, the filter configuration must be "changed" to specify which data is in the columns of interest.


The screenshot above shows the import settings that were appropriate for the data I copied from EUR-Lex. Your settings will likely differ, but in each case the values need to be checked or set in the fields near the arrows ("Source language" particularly at the top and the three dropdown menus by the second arrow below).


Once the data are imported, some adjustments can be made by splitting or joining segments, but I don't think the effort is generally worth it, because in the cases I have seen, data are not far out of sync if they are mismatched, and the synchronization is usually corrected after a short interval.

In the Translations list of the Project home, the bilingual text can be selected and added to a LiveDocs corpus using the menus or ribbons.


The screenshot below shows the worst location of badly synchronized data in the text I copied here:


This minor dislocation does not pose a significant barrier to finding the information I might need to read and understand when using this judgment as a reference. The document context is available from the context menu in the memoQ Concordance as well as the context menu of the entry appearing in the Translation results pane.

A similar data migration procedure can be implemented for most bilingual tables in HTML files, word processing files or other data sources by copying the data into Excel and using the multilingual delimited text filter.
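If the bilingual table is in an HTML file (an AlignFactory bitext, for example) and you would rather not do the copy-and-paste dance by hand, a little scripting can produce a delimited file for the same import filter directly. Here is a rough Python sketch using only the standard library; it assumes a simple layout with one table, one row per segment and one language per column, and will need adjusting for anything fancier:

# Rough sketch: extract the rows of a bilingual HTML table and write them
# as tab-delimited text for memoQ's multilingual delimited text filter.
import csv
import sys
from html.parser import HTMLParser

class TableGrab(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows, self.row, self.cell, self.in_cell = [], [], [], False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.row = []
        elif tag in ("td", "th"):
            self.in_cell, self.cell = True, []
    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self.in_cell = False
            # collapse whitespace inside the cell to one line of text
            self.row.append(" ".join("".join(self.cell).split()))
        elif tag == "tr" and self.row:
            self.rows.append(self.row)
    def handle_data(self, data):
        if self.in_cell:
            self.cell.append(data)

parser = TableGrab()
parser.feed(open(sys.argv[1], encoding="utf-8").read())
with open("bitext.txt", "w", encoding="utf-8", newline="") as out:
    csv.writer(out, delimiter="\t").writerows(parser.rows)

The resulting bitext.txt can then be imported with the same column settings described above.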

Nov 2, 2013

Games freelancers translate

Games are no longer a big part of my world despite years spent collecting, playing and developing them ages ago. In the world of translation, I am an interested observer, fascinated a little by the technical peculiarities I hear of in that domain as well as what appears to be a diversity of opinion and working methods even greater than one finds in my familiar areas of work.

I always enjoy a close look at the working processes of colleagues and clients; often I learn new things from the observation, and I like to ask myself as I see each stage what approach I might take or whether there are changes in the available tools which might make a process more efficient.

An Italian freelance team (leader?) put together a series of seven YouTube videos showing how jobs are prepared and distributed, as well as some particulars of their translation process and QA. The main working tool is Kilgray's memoQ - one of the 6-series versions it seems - as well as the Italian version of Dragon Naturally Speaking and Apsic Xbench, which also make a brief appearance. Altogether 22 minutes of show and tell, which I find mostly interesting and recommend as a nice little process overview.

I've made a YouTube playlist compilation here so it is easier to view the clips in sequence, since I had a little trouble navigating them myself in the somewhat random YouTube suggestion menus. I'm not embedding them here, because the interface for navigating a playlist is much easier to cope with on YouTube itself.

I wish there were more overviews like this available for common translation workflows in other areas as well, such as patent translation, financial report translation in the midst of the "silly season", web site translation or just about anything else. It's doubtful that any of these would betray great trade secrets, but they might offer clients and prospects a little more realistic view of what some might think involves little more than "retyping in another language".

Some content notes on the individual videos of the playlist:

#1 Discusses background research and style guides in the team's approach

#2 Covers the import of the source files (Excel) and the selection of ranges

#3 Term extraction

#4 Statistics, handoff packages and sending out the jobs with the project management system

#5 Creating views of multiple files, voice recognition in Italian, concordances and term lookups

#6 Receiving translated project packages; text to speech reviewing!

#7 QA in memoQ, export to XLIFF for final QA in Apsic Xbench

Oct 28, 2013

Want a revolution? Try memoQ 2013 Release 2.


OK, so I'm exaggerating a bit. And even though the new version of memoQ was officially released today by Kilgray, it really is still beta software. But damned good beta. I expect that there will be more of interest to individual translators added in this version of memoQ than in any other version I've seen up to now. Lots of T's to cross and i's to dot still, but there is great promise, and it's worth having a look now at the future of memoQ.

I'm not talking about changes to the memoQ Server. There are lots of those in this version, and for a change many of them actually seem to be helpful to translators working on the server, and less focused on slicing and stuffing linguistic sausage faster, as so many of the 6.x server features were. The rollout webinar with István Lengyel and Florian Sachse of Kilgray showed enough of why memoQ Server users should be pleased. But they could have filled the hour and three quarters with nothing but presentations of new or improved functions for the rest of us and still not run out of material. Since I still have a project to finish tonight, I'll just hit a few of the highlights that I'll probably return to later as the features stabilize and are truly ready for productive work.

Language recognition
memoQ now intelligently recognizes the language(s) of the source text. This is a small convenience in setting up projects perhaps, but for those occasions when a source language has many passages in another language or more than one other language, these other language segments can be identified automatically, copied source to target and locked. I can think of more than a few patent dispute translations where this would have been helpful.

Startup Wizard
A new feature under the Help menu gives a quick, friendly guided tour of important settings that are often overlooked and hard to find for new users and many experienced ones. This is actually one of my favorite new features and possibly the best help I've seen yet for making a better start with the software.

Better Microsoft Word spelling integration
Custom dictionaries can now be imported from Microsoft Word with greater ease. Users can now also choose Microsoft Word for dynamic marking of possible spelling errors (unknown words). This is a good thing for those of us who hate Hunspell. Oh, and those pesky doubled words are caught now.

More stuff with Microsoft Word...
like exporting tracked changes between translation versions to a DOCX file (sans formatting, I think), exporting target comments to a DOCX file (alas! in writing the specification Kilgray failed to consider that one might want to select which comments get exported and possibly suppress all the comments, but I'm told this will be remedied quickly), font substitution in DOCX files (this was a major WTF feature for me, but if I understood correctly, there is some way I can use this to protect text formatted a certain way, such as code in a programming guide - if that's true, this is cool) and...

the TM lookup tool,
an external application which runs in Microsoft Word and any other environment and allows you to look up text copied to the Clipboard in selected memoQ TMs. Too bad they didn't include termbases in this new feature. Yet.

New filters and processes
like direct import of InDesign files with a preview using the free online Language Terminal integration, Adobe InCopy and some file formats that must be pretty damned geeky because I've never heard of them.

Why am I excited about
a plain text view, which is about as exciting as lukewarm, unspiced pea soup? Well, because its absence has been driving me nuts for years now. It's in this version.

Meanwhile, back at the termbase
great things are happening with new import options that are still a wee bit buggy but will get very good very soon. Until now, memoQ could only import terms from TMX and delimited text. New options include Excel (at last!), MultiTerm XML and TBX. It was child's play for me to tweak a couple of TermStar MARTIF exports from STAR Transit to import those terms, because TBX is a dialect of MARTIF and STAR's MARTIF is very close to TBX. Extra effort? About 2 minutes of search and replace, so I'm hoping Kilgray will go the extra five yards and touch this import option down.

The addition of the MultiTerm XML import option means that memoQ users can now roundtrip data from memoQ to partners using SDL MultiTerm and back for termbase updates. Unfortunately, at the moment the only metadata transferred in the import is the definition field, but efforts are in progress to support at least the MultiTerm fields that memoQ itself exports to XML with Kilgray's own definition. That was simply forgotten at specification time (oops). But still, this will be serious headache relief for those of us who work in teams with SDL Trados users and want to share terminology in the most effective ways.

Is that all?
No. This new version of memoQ is like a very messy Christmas where one can easily lose the overview of what's under the tree with all the wrapping paper and bits of ribbon cluttering the floor. As it gets cleaned up, we'll all notice a good bit more, and I suspect that Santa's Hungarian and German helpers will be slipping a few more things under the tree that they might forget themselves until some user trips over them. There has been so much effort put into consolidation and improvement of existing features that it's simply too much to keep track of. I've made a list and checked it more than twice and still find things to add. But I'll end with another look at something I've already blogged about, that groundbreaking

Monolingual Project and TM Update
with edited files in any target format. It still has a lot of little quirks, especially with some formats, but here I expect a lot of improvements. I've made a little demonstration video and put it on YouTube; it shows the reimport of edited translations to update the translated file and the TM in memoQ, and it shows two different ways to look at tracked changes before revealing the dark secret of Row History Recovery which I think Kilgray didn't realize was possible. Well, damnit, they should have made it a feature with a button anyway.

 
(View this in full screen mode by clicking the icon at the lower right of the video window.) 

Oh yes, and one more cool little thing about this release that I forgot to mention...

... the quickstart shortcut to creating memoQ projects
in the context menu by right-clicking on a file. I'm not much into single-file projects any more and prefer to use "container" projects for customers or categories instead, but it's still a nice little addition that can save time once in a while:



Aug 16, 2013

SDL Trados Studio vs. memoQ: Translating Text Columns in Excel

Paul Filkin of SDL recently showed a few "little known gems" of SDL Trados Studio 2011 in one of his blog posts, which is quite useful for learning how to approach some not-so-rare project challenges with Trados Studio. Here I would like to share his video tutorial about one of those gems - how to translate multiple columns of text in an Excel file. (HINT: these embedded videos are easier to watch in full screen mode, which you can toggle with the icon at the lower right of the play window.)



memoQ can also translate Excel files with a similar structure, and here's how to do that, with a little bit of Dragon Naturally Speaking thrown in just for fun:




Jun 1, 2013

Translating multilingual Excel files in memoQ

Some weeks ago on a Friday, in the late afternoon, I received one of those typical Friday project inquiries: a request for a fast response on whether I would like to translate some 15 to 20 Excel files distributed in three folders, with file names redundant between the folders and over half the 50,000 source words already translated. My translations were, of course, to take the previous work into consideration and remain consistent with it. No translation memory resources were available. Fortunately for my blood pressure, I was offline that afternoon until after business hours. When I saw the request later that evening, I considered what sane approach there might be to such a project, and when none occurred to me at the time, I wrote a note to the project manager requesting more information about the source data, received no response and forgot the whole business as the usual Friday nonsense.

About a week later, while I was engaged in something completely different, it occurred to me that it would have been a fairly straightforward matter to translate the remaining text scattered through those files and build a reference translation memory from the existing translations. In fact, I could even use the available translations from other languages as references in a preview. How? By using a multilingual filter option that Kilgray added to memoQ version 6.2 (with build 15).

Finding that option is not exactly intuitive. I had heard about it but had not followed the discussions closely in the online lists, nor could I remember it from the online demonstrations I had viewed in recent months. But I knew that it worked with Excel files, so I started to import such a file and looked for the proper settings to import the source and target columns. And found nothing.

Fortunately, I used to be a software developer, so I put on my old developer's thinking cap and considered how I might best mystify users with a new feature. Aha! Name the feature something completely different! So I looked again at the list of import filters for my Excel file and found a likely candidate, the multilingual delimited text filter. (To use this filter, you must Import with options.)


The first page of the settings dialog for that filter offers Excel as one of the base formats to import:


The columns can be specified by marking the Simple bilingual configuration option, or with somewhat less confusion by examining the options on the Columns tab of the dialog. For the following test file with English source text and German as the desired target language
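The structure of that test file was along these lines (content here is hypothetical; the German column is partly filled to mimic a partially translated job):

ID   English                  German                    Portuguese
1    Switch off the pump.     Die Pumpe ausschalten.    Desligue a bomba.
2    Close the drain valve.                             Feche a válvula de drenagem.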


I used the following settings for the import:


After a little experimentation, I found that I could specify the third language (Portuguese) as a translation even though it was not a language indicated in the project. (Additional target languages are only possible with the PM edition of memoQ, but this information can be designated as a comment if needed in the Translator Pro edition.) This added the Portuguese translations (where available) to the preview in my working window:


Some odd property of the import filter in the version of memoQ tested caused the source text to be copied to the view of unpopulated translations of the project's target language in the preview, but that is of no real consequence. The preview, unlike a typical preview of an Excel file, bears no resemblance to the layout of the source file, but is instead organized by the source text grouped with other specified columns.

Considering how often I have encountered Excel files and other sources structured like this in the past decade, I would say this is probably one of the most useful filters that has been added to memoQ recently. More complex data structures may require cell background colors to be used to exclude unwanted parts of a spreadsheet (colors can be added while configuring the import). It's a shame that the current version of the filter doesn't support ranges or conditions for exclusion, but perhaps that will come later.

Making a translation memory from the existing translations in the file (which were locked and confirmed upon import in the example shown above) is a simple matter of using the command Operations > Confirm and update rows... and selecting the appropriate options. For the example shown here, selecting the locked rows would write all these to the primary translation memory:


Kilgray has a blog post and a recorded webinar (47 minutes long) with further details about using this filter. They state that "This webinar was designed for language service providers and enterprise users managing multilingual projects." However, given the frequency with which many freelancers encounter such formats and their desire to use other language information, comments, context data, etc. in their translations, I think this feature is just as relevant to freelance translators.

Update 2013-07-25: After a series of recent tests involving imports and segmentation, I wanted to see how the multilingual Excel filter would import data in which individual cells contain multiple sentences or line breaks. Theoretically the segments should correspond to the cell structures, but would they in fact? I decided to import one of my Excel files that I use for segmentation demos. To keep all the content of a cell in one segment with the regular Excel filter, I have to use a "paragraph segmentation" ruleset and set "soft breaks" as inline tags in the filter's import settings. But the default settings of the multilingual Excel filter achieve the same result:


This showed me that the "multilingual" filter might in fact save me time and trouble for importing files from certain customers where I want to avoid segmentation inside the cells altogether. And of course, the multilingual filter is an obvious quick way to load data from an Excel file and, as mentioned above for partial data, send it to a TM - a process which used to involve saving as a CSV file from Excel, worrying about saving as UTF-8, etc. That process might not even work with the test file shown here (I'm not really inclined to try it).


Dec 7, 2012

Terminology collaboration with Google Docs: new twists

A few years ago, I put a notice in this blog about a colleague's interesting use of Google Docs to share terminology with faraway colleagues in a project. Earlier this year I enjoyed a similar collaboration with a Google Docs spreadsheet used to exchange and update terminology on a very time-critical annual report with translators using two different versions of Trados, memoQ and no CAT tools at all.

Sharing information via Google Docs was quite easy, and we were able to configure the access rights without a lot of trouble. But at the time I still had a bit of extra, annoying effort to get the data imported into my working environment for frequent updates.

Tonight another colleague contacted me with basically the same problem. Her client manages data in an Excel spreadsheet, which gets updated and sent out frequently. She already had the idea that this might work better in Google Docs, and I agreed.

But I kept thinking about that annoying update problem....

One can, of course, export Google Docs spreadsheet data in various formats:


I've marked a few of the export ("download") formats which are probably useful for a subsequent import into a translation environment too. But the downloaded data still won't be in the "perfect" format in many cases, and there will be extra steps involved in matching it up to the fields in your term base.

One way to simplify this problem is to create another online spreadsheet in Google Docs and link it to the original, shared spreadsheet. In this second spreadsheet, which is your "personal" copy for use in your favorite tool, you reformat the data so they will export in a form that makes your later import to your tool's termbase easier.

In my case, I use memoQ, so I created a Google Docs spreadsheet with the first row containing the default field names of interest from the CSV export of my memoQ termbase:

I linked the columns in my personal online spreadsheet with the shared spreadsheet using the ImportRange command. It has two arguments, both of which have to be enclosed in quotes. The first is the key of the online spreadsheet to be referenced; it is shown in the URL of that spreadsheet (just look in the address bar of your browser and you will see it). The second specifies the sheet and the range of cells to copy, so the whole thing looks something like =ImportRange("abc123", "Sheet1!A2:A500"), with abc123 standing in for the (much longer) real key. I put this formula in one cell and it copied the entire column for me.

I could, if I wanted to, use conditional (IF) statements and other tricks to transform some data in columns of the other sheet and build the semicolon-delimited term properties list (Term_Info) that memoQ uses to keep track of gender, capitalization enforcement, forbidden status, etc. But none of that is needed for simple sharing of terms, definitions and examples, for instance.

I simply export my personal Google Docs spreadsheet as CSV, then import it into my desired termbase in memoQ. If I have IDs set for the term entries in the online spreadsheet, I could even choose ID-based updates of my local termbase when I do the import.

Those who use other tools, such as Trados, OmegaT or WordFast can set up their spreadsheets and do exports as best suits their needs.

This approach enables you to take source data in nearly any format in an online spreadsheet and rework it for the greatest convenience in the tool of your choice. Although not a "perfect" solution, it is perhaps a convenient one until better resources are commonly available for dynamic, cross-platform translation collaboration.

So what do I recommend my friend try as a first step? Maybe take the client's latest spreadsheet, copy and paste it into Google Docs, and share it with the client and others on the team. Then it's already "up there" for everyone's convenience (local XLSX copies can be downloaded any time), and she can get on with creating a convenient "view" of this shared data in her personal spreadsheet, which can be exported for local use at any time. That personal sheet could also be shared (read-only access recommended) with other team members using the same translation environment tool.

Nov 24, 2012

Combining sublanguages in memoQ terminology

The termbase structure of memoQ is limited with respect to its information fields; users looking for a field to record where a term was found, or hoping to add one for its sublanguage (as a term property), will be disappointed. The term source has no field of its own, and memoQ currently does not allow new fields to be added; the sublanguage is handled at the term level (which makes sense, I suppose, if one does adaptation projects from one sublanguage to another). But a memoQ termbase is quite flexible when it comes to adding languages and sublanguages; it can include any number of these.

But this flexibility can cause some problems in term management for busy translators in some pairs. It's nice that UK English terminology will display in my project with US English set as the target language. But when I want to export a simple list of all English and all German words in a termbase, for example - and there are synonyms in the entries as well - the resultant delimited output is very confusing for many people. And if you decide that you want to move all the terms to "generic" English, things can get really messy.

I faced exactly that problem just last week. I had been maintaining the termbase for an end client for about a year, starting with a rather large in-house terminology I was given in an Excel file. I imported it to memoQ as generic German and started working with the target language set to generic English. After a while the customer pointed out that UK English was their "standard". I swallowed hard and set my template project to UK English, pointing out that they would still be getting English with a rather American character from me no matter what I did with the spellchecker. Then later when I found out about the bug in SDL Trados where XLIFF files are difficult to import if the sublanguages are not specified, I set the source language of that customer's project to German (Germany).

After that they opened a subsidiary in the US and suddenly my native language variant was the new "standard". Ha, ha. I now had a term base with five languages: two flavors of German and three flavors of English. Consolidation time!

But how does one do this? memoQ itself is unhelpful in this regard; it barely offers any management features for terminology, much less tricks like merging sublanguages.

It's not that hard, really, and the steps depend on what data you actually keep in the termbase. If you never enter definitions for your terms, things are pretty easy.

To start consolidating your terms by combining the sublanguages, first export a fully specified delimited text file. That's a "CSV" file with all the defaults.

Let's have a look at how that file is structured:

If you open the exported data in a spreadsheet, the first group of columns - 12 (A-L) in my case - contains the concept-level fields. memoQ refers to a concept in the termbase structure as an "entry". Each entry can have any number of languages, with each sublanguage counting as an "independent" language. Related sublanguages are grouped.

A given language or sublanguage can have any number of terms, which are treated as synonyms. Why are they synonyms? Because in the term structure they share the same definition. If you have three language variants for English like I did, you can have three definitions. The definition fields are the first problem source.

Each entry for a language or sublanguage has three fields: the actual term (word, phrase), an example (which is what you wrote in the "usage" field), and "term info" which is a bunch of other meta data for capitalization rules, matching, gender, forbidden status, part of speech, etc.


If there are no definitions worth keeping in the extra variants of the language you want to combine, just delete all the extra definition fields. Leave the first one. In my case last week, I left English_Def and deleted English_United_Kingdom_Def and English_United_States_Def. Then I renamed all the English_United_Kingdom and English_United_States columns to English. I made similar changes for the German variants. Then I saved the file and imported it to a new termbase in memoQ. Problem solved. All three English types were combined as "English" and the German variants as "German". Done.
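For those who would rather not do the column surgery by hand in Excel, this simple case can also be scripted. Here is a minimal Python sketch, assuming a tab-delimited export and the column names from my project (adjust both to your own data):

import csv

# Extra definition columns to drop (keep only the first one per language)
DROP = {"English_United_Kingdom_Def", "English_United_States_Def", "German_Germany_Def"}
# Sublanguage column prefixes to fold into the base language
VARIANTS = ("English_United_Kingdom", "English_United_States", "German_Germany")

def base_name(header):
    # English_United_Kingdom_Example -> English_Example, German_Germany -> German, etc.
    for variant in VARIANTS:
        if header.startswith(variant):
            return header.replace(variant, variant.split("_")[0], 1)
    return header

with open("terms.csv", encoding="utf-8", newline="") as f:
    rows = list(csv.reader(f, delimiter="\t"))

drop_idx = {i for i, h in enumerate(rows[0]) if h in DROP}
rows = [[v for i, v in enumerate(r) if i not in drop_idx] for r in rows]
rows[0] = [base_name(h) for h in rows[0]]

with open("terms_merged.csv", "w", encoding="utf-8", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(rows)

The renamed file then imports exactly as described above.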

If I have definitions that I want to keep, there is a little more complication to avoid losing data. My quick and dirty solution was to create a temporary column for each major language in which I combine all the definitions for the language's variants. I do this by using Excel's concatenation function, something like this:

=CONCATENATE(N2, IF(U2 = "","", " / "),U2,IF(Y2 = "",""," / "),Y2)

In my case N was the definition column for English, U the definitions for UK English and Y for US English.

I renamed my merge columns as English_Def and German_Def and deleted the names of the original definition columns then saved the data as Unicode text (UTF-8, the memoQ default to avoid potential problems with mapping characters on re-import of the data to memoQ). After import, a quick look at my test data confirmed that no data was lost and all the language variants were combined as one major language:


Obviously a little editing is still desirable - entry #3 shows some duplication because two English variants contained the same term; my sloppy conditional statement also left a few leading slashes for cases where the first definition column was empty and a later one for another variant of the same language was not. But that's not a big deal; one should always have a careful look at data after doing something like this.

Sep 16, 2012

Equivalent Rates in Translation Billing

This article appeared in its original form in September 2008 on an online translation portal. It has been moved here for better maintenance of the content and links. The original text has been modified slightly and the link to the quotation tool updated. The rate equivalence calculator is part of the Sodrat Suite for Translation Productivity. An older post on this topic includes links to other tools for calculating rate equivalency.


*****

Billing questions are seen very frequently in the forums of popular translator portals or professional association sites. Often these concern whether translations should be billed by the word or agglomerated words (thousands or hundreds), lines, pages or other units in the source or target text. The answers to these questions reveal a wide range of approaches, which are often dictated by local convention, habit or simple fear. Often it is claimed that a particular method "does not make sense" because of compound word structures or other issues (which is frequently claimed for my language pair, German to English, for example). These claims are interesting, because other successful translators with the same language combinations often view things very differently. Who is right?

Before answering that question, let’s ask what sort of an answer can be trusted most. Would you stake your business on a “gut feeling” recommendation of another translator or translators when many others argue just as vehemently without evidence? Or would you feel more secure using a mysterious online calculator on an agency web site which purports to be based on a large body of text in the respective languages? Or would you prefer to see hard numbers based on your own translations that you have carried out in a number of different fields for various customers? Which approach do you think would give you the best basis for making your business decisions and protecting yourself against getting shortchanged in pricing?

I prefer to deal with real data from my own work. It not only reflects what I have been translating but also what I am likely to be translating in the next year or two. If I make a radical change I can quickly check and make sure that my pricing model is still “safe”.

To answer my own pricing questions, I created a spreadsheet in Microsoft Excel, which allows me to enter the actual data from individual projects and see what the relationship between target and source text pricing would be for words, lines and pages. You can have a copy of this spreadsheet to use yourself by downloading it here.

Because the true measure of price appropriateness is reflected in what one actually earns for the time spent working, a tracking calculator for the hourly earnings on each job was included in that spreadsheet to serve as a "control", but here I would like to focus on the relationship between different unit costing approaches and how to “adjust” prices to the units your prospect or customer is most comfortable with.

When I entered actual data from two types of translation work that I do (one specialized, the other a general category), I discovered some interesting things. I set up my spreadsheet to calculate the relative standard deviations (RSD) of the data, and what I found was that these were generally under 5%. What does this really mean? It means that if the data for your translation jobs are normally distributed, about 68% of the time the “actual” price relationship between two methods will fall in a “band” of plus or minus one RSD around the average, about 95% of the time the actual relationship will be within plus or minus two RSDs, and about 99% of the time it will be within plus or minus 2.5 RSDs.

Still confused? Here’s a specific example from the downloadable spreadsheet:

If I want to earn € 0.15 per target word for a chemistry translation but the customer I am dealing with wants me to bill in source lines (allowing a “fixed price” to be calculated in advance – line rates are also common in Germany), I enter my desired rate in the little calculator table in cell B20. In cell E20 I see that I need to charge about € 1.27 per source line of 55 characters. How reliable is that figure? The relative standard deviation of the source line to target word ratio is just under 5% (some versions of the calculating spreadsheet in circulation do not include this figure, but it is accurate). This means that 99% of the time if you use this pricing, the “worst” real target word rate you achieve will be about 13.1 euro cents and the “best” you’ll do will be about 16.9 euro cents. If you work consistently with this pricing strategy your average earnings will be 15 euro cents per word. In many cases the bandwidth of variation will be much narrower than the example I have presented here. Compile your own data and see.
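The same calculation is easy to script if you keep your job data in any machine-readable form. Here is a minimal Python sketch with hypothetical job figures:

import statistics

# Hypothetical job data: (source lines of 55 characters, target words)
jobs = [(310, 2600), (150, 1220), (420, 3560), (95, 790)]

ratios = [words / lines for lines, words in jobs]  # target words per source line
mean = statistics.mean(ratios)
rsd = statistics.stdev(ratios) / mean              # relative standard deviation

target_word_rate = 0.15                            # EUR per target word
line_rate = target_word_rate * mean                # equivalent EUR per source line

# 99% band (about +/- 2.5 RSD): worst and best effective word rates
low = target_word_rate * (1 - 2.5 * rsd)
high = target_word_rate * (1 + 2.5 * rsd)
print(f"charge {line_rate:.2f} EUR/line; effective word rate {low:.3f} to {high:.3f}")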

But what about the “exception”, the fearful translator might say? One indicator of trouble for my language pair if I am using a source word pricing strategy might be an unusually low source word to source line ratio, which would indicate the likely presence of very long compound words in German. What do you do in a case like this? Raise your price if you feel like it. Use real data to show that this text really is different and must be priced differently from other work. Not everyone will agree with this idea, but you may have more success with it than you would expect. The important point here is that, by tracking actual data from your own work, you have a much clearer understanding of when rates may need adjusting.

When you examine your own data you will find that the actual variation in earnings between the calculation methods presented is small in most cases, at least for European languages. If this is not the case for your language, then you will have hard numbers to use in your quotation. Negotiations based on fact often work better, though all this can be greatly outweighed or offset by psychological factors.

By using the rate equivalence spreadsheet or creating your own similar tool, you can navigate the hazards of various quotation methods with greater confidence, quickly determining equivalent rates in the units expected by prospects and customers. This will ensure that you reach your average earning targets and achieve the same average hourly earnings as with your familiar unit pricing. You’ll know how much you have to raise your word price or how much margin you have to reduce it if you are asked to quote by source word instead of target word or vice versa. Now get down to business.

The Sodrat Suite: delimited text to MultiTerm

The growing library of tools in the Sodrat Suite for Translation Productivity now includes a handy drag & drop script sample for converting simple tab-delimited terminology lists into data which can be imported directly into the generations of (SDL) Trados MultiTerm with which we've been blessed for more than half a decade.

Many people rightly fear and loathe the MultiTerm Convert program from SDL and despite many well-written tutorials for its use, intelligent, competent adult translators have become all too frequent callers on the suicide hotline in Maidenhead, UK.

Thus I've cast my lot with members of an Open Source rescue team dedicated to squeezing a little gain for the victims of all this pain and prescribing appropriate remedies for what ails so many of us by developing the Sodrat Software Suite. The solutions here are quick, but they aren't half as dirty as what some pay good money for.

The script below is deliberately unoptimized. It represents less work than drinking a cup of strong, hot coffee on a cold and clammy autumn morning. Anyone who feels like improving on this thing and making it more robust and useful is encouraged to do so. It was written quickly to cover what I believe is the most common case for this type of data conversion. An 80 or 90% solution is 100% satisfactory in most cases. Copy the script from below, put it in a text file and change the extension to VBS, or get the tool, a readme file and a bit of test data by clicking the icon link above.

To run the conversion, just put your tab-delimited text file in the folder with the VBS script and then drag it onto the script's icon. The MultiTerm XML import file will be created in the same folder, using the name of the original terminology file as the basis of its name.

Drag & Drop Script for Converting Tab-delimited
Bilingual Data to MultiTerm XML

ForReading = 1
Set objArgs = WScript.Arguments
inFile = objArgs(0) ' name of the file dropped on the script

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile(inFile, ForReading)

' read first line for language field names
strLine = objFile.ReadLine
arrFields = Split(strLine, chr(9))

outText = "          "UTF-16" & chr(34) & "?>" & chr(13) & "" & chr(13)   
   
Do Until objFile.AtEndOfStream
 strLine = objFile.ReadLine
 if StrLine <> "" then
  arrTerms = Split(strLine, vbTab)
   
  outText = outText & "" & chr(13)
      for i = 0 to (UBound(arrTerms) )
        outText = outText & chr(9) & "" & chr(13) & chr(9) & chr (9) _
                   & "" & chr(13)
        ' write the term
        outText = outText & chr(9) & chr (9) & chr (9) & "" & _
               arrTerms(i) & "
" & chr(13) & chr(9) & "
" & chr(13)
      next
  outText = outText & "
" & chr(13)
 end if
Loop

outText = outText & "
"
objFile.Close
outFile = inFile & "-MultiTerm.xml"

' second param is overwrite, third is unicode
Set objFile = objFSO.CreateTextFile(outFile,1,1)
objFile.Write outText
objFile.Close


Jul 27, 2012

Translating "foreign" bilingual tables in memoQ

--- In memoQ@yahoogroups.com, Liset Nyland wrote:
> A client has sent me a 2-column rtf-file export from DVX.
> It looks similar to the MemoQ export but not quite.
>
> The target column is full of fuzzy matches, so I need to recover these.
...
> ... do you know if there's a bilingual format exported from DVX that can
> be loaded and translated directly in MemoQ?
There is one way to deal more-or-less directly with the DVX bilingual RTF tables - or any others being introduced by other providers or bilingual tables that some customers are fond of using to store translation strings or other content. I would love to see a general import routine from Kilgray that allows selection of source and target columns of various file types in a dialog, but until then...
1. Get a copy of the PlusToyZ macros by German/English to Ukrainian/Russian translator Arkady Vysotsky.
2. Copy the source and target columns into a separate RTF or MS Word file.
3. Run the PlusToyZ macro to convert that to a Trados-like bilingual (the old Wordfast/Trados RTF/DOC bilingual)
4. Import the converted file to memoQ using the default filter, which is intelligent enough to recognize that you are dealing with Trados-compatible bilingual DOC/RTF.
5. Translate, edit, feed the TM, etc.
6. Export the processed file.
7. Use the appropriate conversion macro in PlusToyZ to turn the data back into a table.
8. Paste the data back into the original bilingual table from DVX or whatever tool it came from.
This is the preferred method to use when your bilingual table is partially pretranslated, or you have a translated table you want to edit while having a better look at the source text. This would also be a useful method for jobs I've had where customers have string or terminology lists in Excel to translate that are in some cases incomplete.

Once you get to Step 3, you can translate that bilingual format in any tool which works with the old Trados RTF/Word segmentation, such as WordFast Classic. I think that was actually the reason Arkady wrote those macros in the first place.

If you want to protect the DVX codes (or similar structures, including placeholders) or store them in the TM as proper tags, run the Regex Tagger or use a cascading filter as described in my other blog post about regular expressions for DVX external table translation in memoQ; a pattern along the lines of \{[0-9]+\} will catch DVX's numbered codes in curly braces. Of course, for content other than DVX tags, a different regular expression will be needed.

Jul 2, 2012

Sometimes one CAT tool is not enough

Not long ago, a colleague in New Zealand expressed her frustration about the limits of interoperability for common translation environment tools and her sense of unfulfilled promises.

In the case she was concerned with, she was quite right. There are workarounds for complex MS Word documents with footnotes, but none of these are really optimal for a team working simultaneously in several different CAT tools. In the case of memoQ 5 (which was part of the mix), the lack of support for footnotes in RTF/DOC bilinguals made it impossible to review an uncleaned translation done in WordFast Classic (not a problem for simpler files), and the bilingual DOC export from memoQ used the "simple" format of one segment per line, thus losing the formatting for the working translator. I hope that will be dealt with in time by Kilgray's developers.

But fortunately, interoperability really does work - it is "the art of compromise" as one industry guru put it, but there are many acceptable compromise strategies that allow productive collaboration, and memoQ excels in this regard more than any other tool I know. But as I have said so often, we need a broad palette of tools to enable us to handle any job efficiently, and last week's project here was a good example of this.

No good deed goes unpunished, and my punishment for an almost miraculous rescue of the editing and harmonization of a large, complex financial report done in a hurry by several translators, some of whom don't use CAT tools at all, was that I got to do the update of that text and see all the little stuff we missed the first time around when the client CEO and I traded sleep for coffee and Excel spreadsheets. Actually, I loved that job, and I was proud of what we could accomplish in 48 hours that should have taken a week or more of overtime. All of it possible only thanks to memoQ LiveDocs and the QA module. And lots and lots of coffee.

In this round, however, I was determined to avoid some of the pain caused last time by file format problems. The Notes to the annual report contained about 30 embedded Excel tables in a Word document. "So what?" says the user of Star Transit or DVX2. "Uh oh!" say the Trados and memoQ users. This is where interoperability saved me hours of bother.


I'm no longer comfortable doing routine work in my former preferred tool, Déjà Vu. The working environment of memoQ is more ergonomic for me, and although I still miss a number of very useful features in DVX, on balance, the features I gained in memoQ allow me to do many more things better (or even at all). Nonetheless, this time Atril had the clear advantage.

I translated the main text of the Notes in memoQ, making full use of my translation memories, glossaries and QA settings there. I enjoyed the previews of the embedded Excel documents, which gave me necessary context for some of my work, but the actual content of those tables was untouchable in memoQ. Then I exported the translation, which was an English document with embedded tables in German.

This compound document was then imported to DVX2 together with my TM. I copied the source to target, locked all the English content (it was helpful that the content extracted from the Excel tables was at the end of the translation scroll) and pretranslated what remained from the TM. Less than an hour later I exported the completely finished translation - and saved a lot of fiddly work exporting and importing those stupid tables like I had to do before. I really do hope that memoQ's filters for MS Office documents will be updated to handle embedded objects soon - it's not uncommon that I have Excel, Visio or PowerPoint objects stuck in my Word documents.

After delivering the text, I then turned to the next task: exporting my terminology. Once again, interoperability came to my rescue here. This customer places a lot of importance on the correct use of IFRS and their own terminology. One of the ways we coordinate this is to exchange glossary information in a format that this customer, who doesn't know a CAT tool from a Persian feline, can cope with. A nicely formatted DOCX or PDF dictionary does the trick. But I can't do that with memoQ.

I've been advocating the addition of XSL script selection to memoQ's XML term export for some time now. My own efforts to create good scripts for my purposes are hampered by the fact that I haven't done much programming for a decade now and I've lost most of my skills. So until I sort that problem out, I take the terms in XML from memoQ and import them to SDL Trados MultiTerm. MultiTerm is unique among the terminology tools on the low end of the market in that it has always offered some useful export format templates (which can be adapted) for re-use of the term information in other environments. Formatted RTF dictionaries like the one shown here as a thumbnail, web pages, custom text exports... the sky's the limit if you can deal with the odd configuration options and unexpected crashes. Having traversed that minefield often enough in the past decade, I can usually produce something good-looking from my memoQ terminology with SDL Trados MultiTerm without much ado. And my clients like it a lot more than an ugly CSV export.

So why didn't I just use Déjà Vu or Trados in the first place? Re-read the text above. None of the three CAT tools I use was capable of doing everything I required as efficiently as I needed it done. DVX2 came the closest, but the lack of a preview, the primitive way that tags (codes) are still managed and the lack of comfort I feel translating in that environment (I'm much slower now) made it a poor option for the bulk of the work. But working in carefully planned concert, these three tools produced excellent results, made my client happy and made me happy by saving the rest of my day with an early delivery.

Dec 24, 2011

Converting MARTIF to Excel

This afternoon I received an interesting project of a sort I don't see often - Star Transit. I avoided these like the plague for years, because I dislike Transit as a working environment, though I expect the latest version is probably an improvement over what I used to use. However, since Kilgray created an excellent import routine for Transit packages (PXF files), working on these projects is quite a simple matter. Except for terminology. The memoQ integration currently does not include the Transit terminology.

The client was kind enough to supply MARTIF exports from the Transit dictionaries, but unfortunately that's not an import format for memoQ, though really it should not be difficult to deal with that XML format (I hope). So I went in search of a solution and soon discovered a PrAdZ thread in which the Czech translator Antonín Otáhal offered a VBA macro for converting MTF (MARTIF) files to Excel.

The solution works nicely, though in my tests I found it necessary to open the MTF files in Notepad and re-save them as ANSI so the special characters in German would not get trashed. And since I hate typing a full file path into the selection dialog, I modified the code to include a proper file selection dialog. If anyone else can use such a tool, I've made it available here as an XLSM file (a macro-enabled Excel workbook for MS Office 2010). Improvements are very welcome; I've been out of the programming game too long now to refine this much without investing more time than it is worth to me.

Nonetheless, I'm quite pleased that I can now save a tab-delimited or CSV text file from Excel and import this easily into memoQ or other translation environments. So moving term data from Star Transit to other tools is now a little easier.

To use the tool:
  1. Make a copy of the Excel file.
  2. Open your working copy of the Excel file.
  3. Press Alt+F8 to bring up the list of macros.
  4. Select the macro "mtf2excel".
  5. Click the Run button. A file selection dialog will appear, and if everything is OK with the encoding, your term data should appear shortly in the columns of the Excel sheet.
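For anyone allergic to VBA, the same conversion can be sketched in Python instead. The element names below (termEntry, langSet, tig, term) follow the usual MARTIF/TBX nesting, but that is an assumption - check them against an actual Transit export before trusting the output:

    # mtf2csv.py - a rough Python equivalent of the mtf2excel macro: read a
    # MARTIF (MTF) file and write the terms as tab-delimited text that memoQ
    # and most other tools can import. Only the first term per language in
    # each entry is kept in this sketch.
    import csv
    import xml.etree.ElementTree as ET

    XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

    def mtf_to_csv(mtf_path, csv_path):
        root = ET.parse(mtf_path).getroot()
        langs = []  # language codes in order of first appearance
        rows = []
        for entry in root.iter("termEntry"):
            terms = {}
            for langset in entry.iter("langSet"):
                lang = langset.get(XML_LANG, "?")
                terms[lang] = langset.findtext(".//term", default="").strip()
                if lang not in langs:
                    langs.append(lang)
            rows.append(terms)
        with open(csv_path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f, delimiter="\t")
            writer.writerow(langs)  # header row of language codes
            for terms in rows:
                writer.writerow([terms.get(lang, "") for lang in langs])

    if __name__ == "__main__":
        mtf_to_csv("dictionary.mtf", "dictionary.txt")

Since ElementTree honors the encoding declared in the XML prolog, the Notepad re-saving dance is usually unnecessary here - unless, of course, the declaration lies about the actual encoding.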

Jun 3, 2010

Dealing with embedded XML and HTML in an Excel file

One of the occasionally gratifying aspects of translation for an IT geek like me is that IT challenges continue to follow me. Actually, that's one of the things about the current state of the profession that I hate too. (I'm a not-so-closeted Luddite.)

This week's challenge was more of a fun puzzle, because it wasn't my problem, but rather someone else's. An agency owner friend sent me an Excel file that was driving him nuts; his localization engineer, a former star at a Top Ten agency, had pronounced the task of filtering the data in a useful way to be impossible. I love it when engineers say something is impossible; it usually means there is a simple solution at hand if one gives the matter a little real thought.

The file structure looked something like this:

Only the yellow columns were to be translated; some had plain text content (with line breaks in some cases), while other yellow columns contained XML or HTML content.

Just for fun, I fired off a quick support request to Kilgray along with a copy of my test file, because I thought maybe there was a cascading filter feature I might have overlooked. (There isn't, but the idea was noted as a good one, so maybe we'll see it in the future.) In any case, Denis Hay offered a creative suggestion as he almost inevitably does:
Hi Kevin,

While waiting for "cascading filters" (which I also find a great idea), what you could do is simply copy these Excel columns to a Word table, then use either Tortoise Tagger or, preferably, the +Tools from the Wordfast website to tag the HTML/XML content. Import that tagged Word file into memoQ, and you should get what you wanted.

Once translated, just paste back to Excel.

Kind regards,
Denis Hay
Technical consulting and training
Kilgray Translation Technologies


There's another way, which I had discovered by the time Denis' suggestion arrived. It works well manually, but it can also be automated with macros if you're dealing with content management system exports where the structure recurs and you'll be doing a lot of this.

Do the following:
  1. Copy each individual Excel column of interest (or at least the ones with XML/HTML) into a plain text file.
  2. In the case of the text files with tagged content (i.e. XML or HTML), change the file extension to fit the content (e.g. "text2.txt" becomes "text2.xml").
  3. Translate the text files with your favorite translation environment tool, using the filters appropriate for each type of content.
  4. After exporting the files from your working environment, copy and paste the text file content back into the corresponding columns of the original Excel file. Note that if there are line breaks somewhere, your row positions may get screwed up. This can be avoided by performing this operation in OpenOffice Calc. (Maybe there's an appropriate setting for Excel to avoid this problem, but I don't know it.)
The key to sorting this puzzle out was to consider the discrete parts (i.e. the individual yellow columns) of the entire data set as separate collections of data. Dividing a problem up into its constituent parts is often a good way to find an easy solution.
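If this kind of export crosses your desk regularly, steps 1 and 4 can be scripted rather than done by hand. Here is a sketch using the openpyxl library; the column letter and file names are examples only, and escaping the embedded line breaks sidesteps the row-scrambling problem mentioned in step 4:

    # column_roundtrip.py - a sketch of steps 1 and 4 with openpyxl
    # (pip install openpyxl). Column letter and file names are examples.
    from openpyxl import load_workbook

    def export_column(xlsx_path, column, txt_path):
        ws = load_workbook(xlsx_path).active
        with open(txt_path, "w", encoding="utf-8") as f:
            for cell in ws[column]:
                text = "" if cell.value is None else str(cell.value)
                # escape embedded line breaks so one row stays on one line
                f.write(text.replace("\n", "\\n") + "\n")

    def import_column(xlsx_path, column, txt_path, out_path):
        wb = load_workbook(xlsx_path)
        ws = wb.active
        with open(txt_path, encoding="utf-8") as f:
            for row, line in enumerate(f, start=1):
                ws[f"{column}{row}"] = line.rstrip("\n").replace("\\n", "\n")
        wb.save(out_path)

    if __name__ == "__main__":
        export_column("source.xlsx", "D", "column_D.txt")  # rename to .xml if tagged
        import_column("source.xlsx", "D", "column_D.txt", "translated.xlsx")

The same round trip could of course be done with a VBA macro inside the workbook itself; the principle of treating each column as its own little file is what matters.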

Feb 14, 2009

Tables for estimating job volumes

Anil Gidwani is a software engineer and German to English translator. Like me, he's fond of analyzing data and finding or creating tools to help him plan his business. One of his recent efforts in this regard involves creating Excel tables to show the relationships between weekly translation volume, word price and income as well as weekly volumes, the number of words per job and the number of jobs per month. A description of his efforts as well as the two tables can be found here.

Such quantitative analysis isn't for everyone, though for those who prefer more than a guess and a prayer in their planning it can certainly be useful. And as "obvious" as the principles and content of these tables are, I never made the effort to build them myself; I usually calculate numbers like that in my head. Seeing all the data laid out in a well-organized table is more helpful, though. Thank you, Anil.
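The arithmetic behind such tables is simple enough to sketch for anyone who wants to roll their own: monthly income is just weekly volume times word price times the number of working weeks in a month. The volumes and rates below are made-up examples, not recommendations:

    # income_table.py - a sketch of the volume/price/income relationship.
    # The volumes and rates are made-up examples, not recommendations.
    WEEKS_PER_MONTH = 52 / 12  # about 4.33

    volumes = [8000, 10000, 12000, 15000]  # words per week
    rates = [0.10, 0.15, 0.20, 0.25]       # price per word

    print("words/week" + "".join(f"{r:>10.2f}" for r in rates))
    for v in volumes:
        cells = "".join(f"{v * r * WEEKS_PER_MONTH:>10.0f}" for r in rates)
        print(f"{v:>10}{cells}")

Each cell is then the gross monthly income at that volume and rate; for example, 10,000 words a week at 0.15 per word works out to about 6,500 a month.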

Nov 20, 2008

Calculating equivalent rates for translation billing

Many of us who have dealt with an international clientele have encountered different approaches to counting text for charging translations. Charging by the word is probably the most common practice, but in various places one might encounter calculations per hundred words (Australia), thousand words (UK), "lines" (common in the German-speaking countries), "standard" pages or other units. Then, of course, there is the matter of charging by the source text count or by the target text count.

These issues are discussed at great length and with great passion by many translators, some of whom are convinced that only certain methods protect one against being "cheated" with particular language combinations. I can't judge the validity of this belief for every language, but when I hear that opinion expressed for the language pair I work in, I know it is nonsense. In September 2008, I published an article on a translators' portal (since removed from that site, but now available here) as well as a spreadsheet tool to help inject a little quantitative thinking into the debate.

Careful analysis of various types of documents shows that rates can be converted between all the common methods of calculation with very low standard deviations. Thus if you calculate the conversion factors between different methods (for example, source words versus target lines), your earnings will, on average (i.e. after a number of jobs), be pretty much the same as if you had calculated using your familiar method. Individual jobs may pay a bit more or a bit less, but it's important - unless you are looking at a one-off job for a client whom you will never deal with again - to take a long-term perspective and accommodate client requests if a quotation by a particular method is requested. I usually charge by source lines of 55 characters each (including spaces); if a British customer asks for a quotation in GBP per 1000 words, that's not a problem. (Well, given the rapidly dropping pound it might be, but currency exchange is another kettle of fish altogether. Maybe I'll deal with that another day.)
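For anyone who wants to compute such a conversion factor, the math is a one-liner: a per-line rate divided by the characters per line gives a per-character rate, which multiplied by the average characters per word gives a per-word rate. The averages below are illustrative only; measure your own texts to get real factors:

    # rate_conversion.py - convert a rate per standard line (55 characters,
    # spaces included) into equivalent per-word rates. The average characters
    # per word is an illustrative figure; measure it from your own corpus.
    CHARS_PER_LINE = 55

    def line_rate_to_word_rate(rate_per_line, avg_chars_per_word):
        return rate_per_line * avg_chars_per_word / CHARS_PER_LINE

    rate_per_line = 1.50      # per 55-character source line (example)
    avg_chars_per_word = 7.8  # rough figure for German source, spaces included

    per_word = line_rate_to_word_rate(rate_per_line, avg_chars_per_word)
    print(f"{per_word:.3f} per word")               # about 0.213
    print(f"{per_word * 1000:.2f} per 1000 words")  # for a UK-style quote

Run over a decent sample of past jobs, factors like these settle down quickly, which is exactly why the standard deviations come out so low.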

The Excel spreadsheet I put together is designed to make rate comparisons between two types of texts and to track hourly earnings on individual jobs. After all, what is most important isn't the rate per word/line/page/etc. but how much you earn for a given amount of your time.

Alessandra Muzzi of Amtrad Services in Italy has put together a very nice online fee conversion calculator (as well as a downloadable spreadsheet). This has been around for a number of years and is much more user-friendly than my spreadsheet, but comparisons between text types are more difficult (you have to enter data in two different workbooks) and there is no tracking feature for hourly earnings. Still, for getting a quick idea of how one calculation method can be converted to another, her tools are quicker and easier to use than mine.

Update September 16, 2012: After realizing recently that this tool had been unavailable for a long period after a domain change, I reviewed it for current relevance and decided to add it to the growing Sodrat Suite for Translation Productivity, part of an Open Source resistance movement against the abusive complexities of ill-conceived technology in the translation profession.

Nov 16, 2008

The "Target Price Defense Tool" (updated)

Some time ago, after reading the millionth online discussion of the evils of CAT discount schemes and how to counteract them, I decided to add a more quantitative tone to the discussion. I do not share the blanket objections that some have to discount schemes of any sort; if I have an easy text to do that consists of 50% repetition, I am open to discussing the rate for the repeated content. However, I do find some of the CAT schemes proposed by agencies to be beyond ridiculous, and anyone who is serious about paying nothing or a few percent for matches and repeats will be greeted with a hearty, spontaneous laugh for starters.

Clearly, in many cases there is a need to look closely at proposed CAT schemes and consider alternative schemes or word price increases to achieve fair overall compensation. With that in mind, I created an Excel spreadsheet I call the Target Price Defense Tool to help translators (or other service providers) evaluate alternatives. It is available here.
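The core calculation in that spreadsheet is straightforward: weight each match band by the fraction of the full rate it pays, and divide your target rate by the weighted average to get the rate you need to quote. A sketch, with an example grid and word counts that are illustrative only:

    # target_price.py - the core arithmetic of the Target Price Defense Tool:
    # given a proposed match-discount grid and the analysis word counts, find
    # the full rate to quote so the job still pays your target rate overall.
    # The grid and counts below are examples, not a recommendation.

    def required_full_rate(target_rate, bands):
        # bands: list of (word_count, fraction_of_full_rate_paid)
        total_words = sum(words for words, _ in bands)
        weighted = sum(words * pay for words, pay in bands) / total_words
        return target_rate / weighted

    bands = [
        (5000, 1.00),  # no match: full rate
        (2000, 0.60),  # fuzzy matches paid at 60%
        (3000, 0.25),  # repetitions and 100% matches paid at 25%
    ]
    print(f"Quote {required_full_rate(0.20, bands):.3f} per word")  # 0.288

Whether the client will accept the grossed-up rate is another question, but at least the laughter can be backed by numbers.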

Update September 16, 2012: After realizing recently that the tool had been unavailable for a long time after a domain change, I reviewed it and decided to add it to the growing Sodrat Suite for Translation Productivity, part of an Open Source resistance movement against the abusive complexities of ill-conceived technology in the translation profession.