
WEEK 1
1. Who's Who in Localization?
A major part of the class this week was to identify who does what in a typical localization request on the vendor side.
New to me was the emphasis that quality is influenced by many roles, including those not traditionally labeled as “QA.” Sales influences quality by setting expectations and validating feasibility. Project managers influence quality through planning, communication, and vendor coordination. Linguists and reviewers directly execute quality-related tasks, but they do not operate in isolation. Before this class, to be honest, I often associated quality ownership too narrowly with execution roles, rather than seeing it as distributed across the workflow.
2. Quiz Reflection
The short quiz associated with this module exposed some gaps in my understanding of role boundaries. Reviewing these mistakes reinforced an important lesson: from a quality management perspective, quality risks are often introduced before a project even reaches production. Intake, feasibility checks, and expectation-setting are quality-critical moments, even though they occur outside traditional QA processes. This correction reminded me that quality management starts earlier than we normally assume.
3. Who Is Responsible for Quality?
While Harry did not provide a definitive answer to this question in class, based on my own experience I came away with the view that quality-related tasks can be distributed, but responsibility itself cannot be fully delegated. No single role can absorb total accountability, yet quality also cannot be treated as “everyone’s job” in a vague sense.
Instead, responsibility for quality shifts across stages of the workflow. Different roles carry greater responsibility at different points, depending on where quality risks are most likely to be introduced.
WEEK 2
1. Quality Depends on Where You Sit
Harry shared an observation from our Week 1 homework which I found interesting and had never thought about before: when people describe quality as end users, they tend to describe it emotionally. When people describe quality as someone in the supply chain or production flow, they become more objective and requirement-driven.
True. Different people care about different quality aspects. Some things are simply assumed by users, like safety, even if they do not list them as a “quality criterion.” If a client is not the final user of the localized product, their quality definition may be indirectly shaped by business goals, timelines, or internal processes rather than actual user experience. That creates a risk: we might satisfy the client’s stated expectations but still miss what our users actually need.
2. The Client Is Not the Final User, So “Quality” Has an Extra Layer
For example, app or product UI localization, where the company is the buyer but not the reader, versus cases like legal or internal documents, where the buyer is also the user.
This matters because it changes how I interpret client feedback. When the client is not the end user, “quality” can sometimes mean internal usability, risk reduction, or operational fit. If I only chase linguistic perfection, I might miss what the business actually considers a successful outcome.
3. The Quality Guru
The core content of Week 2 was built around the pre-class video (https://www.youtube.com/watch?v=d7qpjsRbg6c) and the idea that quality has multiple definitions.

This was the first time in the course where “quality” stopped being a vague value word and became a practical negotiation problem. If quality can mean “fit for purpose” and also “free of defects,” then the real work is figuring out which one is driving decision-making for this specific project.
4. Turning “Quality” Into Follow-Up Questions
A big part of the class was asking: if a client says “I want a high-quality translation,” is that statement helpful? The answer was basically “not really,” unless we translate it into more specific follow-up questions.
For example, asking directly about tone or narratives can backfire because the client may not be a linguist, or may expect us to infer tone from the source. Timeline expectations need to be specific, not a vague “ASAP,” and PMs should build in a buffer rather than passing deadlines verbatim downstream.
WEEK 3
1. The SoW as the First Documented Quality Guardrail
A central idea this week was treating the Statement of Work as the first documented quality guardrail.
The SoW is the structured version of the clarified client request.

Even though a full SoW may contain many elements, the minimum version in localization must clearly state what is to be done, by when, and for how much.
From a quality perspective, however, the more important question is how different sections of the SoW map to quality requirements.
If quality means fit for purpose, then scope and objectives matter. If quality means free of defects, then testing and standards matter. If quality means value for money, then schedule and payment terms matter.
2. Finding and Instructing the Right Linguists
The next quality guardrail discussed was finding and instructing the right linguists.
This selection process is not random. It requires thinking about key elements such as language pairs, task type, location, and tools.
Instruction was framed more simply, but it is just as critical. Linguists must receive clear information about task scope, volume, deadline, and payment, along with any additional context that affects execution.
Based on my own experience, this connects directly to the “doing it right the first time” definition of quality from Week 2. If instructions are incomplete or ambiguous, rework becomes almost inevitable. Quality failure in this stage often shows up later as endless revision cycles, not immediate errors or red flags.
3. Sanity Checking
One of the most practical discussions this week was around sanity checking. The question raised in class was whether we should simply send the translation back to the client once it is done. The implied answer was clearly no.
Even if we cannot understand the target language, there are still many checks we can perform. For example: file type, formatting, layout, completeness, measurement units, number consistency, date and time formats, scientific names, and structural alignment, etc.
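To make these checks concrete for myself, here is a minimal Python sketch (my own illustration, not code from the class) of a couple of non-linguistic checks a PM could run without reading the target language. The function name `sanity_check` and the specific rules are my assumptions:

```python
import re

def sanity_check(source: str, target: str) -> list[str]:
    """Non-linguistic sanity checks a PM can run without knowing the target language."""
    issues = []
    if not target.strip():
        issues.append("empty target segment")
    # Numbers in the source should reappear in the target.
    src_numbers = re.findall(r"\d+(?:[.,]\d+)?", source)
    tgt_numbers = re.findall(r"\d+(?:[.,]\d+)?", target)
    for n in src_numbers:
        if n not in tgt_numbers:
            issues.append(f"number '{n}' missing from target")
    # Paired brackets should survive translation.
    for open_ch, close_ch in [("(", ")"), ("[", "]"), ("{", "}")]:
        if target.count(open_ch) != target.count(close_ch):
            issues.append(f"unbalanced '{open_ch}{close_ch}' in target")
    return issues
```

For instance, `sanity_check("Take 2 tablets daily.", "Nehmen Sie täglich Tabletten.")` would flag the missing number. A real tool would need extra handling for locale-aware formats (e.g., decimal commas), which this sketch deliberately ignores.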
This part reminded me that quality management is not limited to linguistic expertise. Through these procedural steps, many visible client-facing failures could be caught earlier.
4. Subjective and Objective Requirements
For me, this week’s discussion raised an interesting point about blurred boundaries between the basic six quality categories and whether this strict categorization is the goal.
I believe the real task here is identifying what is most important to the client and how to translate that into objective workflow deliverables.
End users often have their own subjective expectations, while people in the workflow must operate with objective, requirement-driven constraints. This feels like the core tension of quality management. The client might say, “I want it to feel premium.” The workflow needs something more concrete, such as tone guidelines, terminology preferences, or review thresholds. Our job is to bridge that gap.
WEEK 4
1. The Three Pillars of Quality Management
The core concept of this week focused on three pillars of localization, specifically Translation Memory, Termbase or Glossary, and Style Guide. Even though these all felt quite familiar, I realize there are still a few interesting aspects I hadn’t noticed before.
TM:
TMs are built from finalized source-target pairs and stored for reuse. They are commonly separated by content type, file type, locale, client, vendor, or time. TMs can be penalized to reflect their trust level or relevance to the current task.
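The penalty idea clicked for me once I sketched it as simple arithmetic. In this toy Python example (my own, with made-up numbers), a penalized 100% match from an unreviewed TM loses to a slightly fuzzier match from a trusted TM:

```python
def effective_score(raw_match: int, penalty: int) -> int:
    # A TM penalty simply lowers the reported match percentage,
    # so less-trusted memories rank below trusted ones.
    return max(raw_match - penalty, 0)

# (raw match %, penalty, TM name) -- hypothetical values
candidates = [
    (100, 5, "unreviewed vendor TM"),  # penalized: effective 95
    (98, 0, "client-reviewed TM"),     # no penalty: effective 98
]
best = max(candidates, key=lambda c: effective_score(c[0], c[1]))
print(best[2])  # the client-reviewed TM wins despite the lower raw match
```

The design point is that the penalty is not a quality judgment on any single segment; it is a standing statement about how much a whole memory is trusted.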


Glossary:
Glossaries can be developed in three ways: initial client knowledge, initial content analysis, and work in progress. Terminology work needs coordination, feedback, and validation, often requiring access to “content specialists” on the client side.
Style Guide:
For style guides, the points are more pragmatic. What you can build depends on what you have on hand, what the client has, and how willing they are to answer questions. Localization style guides often govern translation quality rather than content creation, so certain content creation guardrails can be dropped in many cases.
2. Reuse Strategy
One point Harry put forward resonated deeply with me: TMs are not inherently good; their value depends on provenance, review status, and relevance to the content at hand. That is why separation, sequencing, and penalties exist. They are governance mechanisms for trust. Reuse is powerful, but trusting matches blindly can create systematic errors at scale.
WEEK 5
1. The TEP Process
The TEP process: Translation, Editing, and Proofreading. The workflow is linear. Work moves forward from one person to the next, and the assumption is that each step improves the output.
But here comes the catch: this classic process rests on one big assumption, that the next person is more qualified and will not make the translation worse.
What if it isn’t?
2. The Hidden Risk (When the Reviewer Is Not Better Than the Translator)
In typical workflows, it is entirely possible that the reviewer is less capable than the translator. As a PM assigning roles, you may not actually know who is stronger in a given domain, language pair, or content type. This breaks the default logic of TEP.
3. The Consensus-Building Workflow
The alternative workflow discussed was Translation, Review, Implementation, with arbitration as the mechanism that turns disagreement into a final decision. In this workflow, the reviewer primarily suggests changes instead of directly overwriting the translation. If there is disagreement, the translator can defend their choices and the reviewer can likewise defend the suggestions. Only mutually agreed changes are implemented in the end.
This feels much more robust.
4. Speed, Cost, and the Tradeoff
Here come the operational tradeoffs. TEP is typically cheaper and faster, and in theory it can be done with one person at minimum. The TRI workflow is typically slower and more expensive because it requires at least two people and usually generates friction and back-and-forth.
So this ends up being more of a business decision. Different content types and risk levels justify different workflows.
5. Feedback Loops
In quality management, if you want long-term improvement and consistency, you need a feedback mechanism that does not rely on extra manual effort.
In TEP, feedback may require separate efforts to send changes back as an extra process. In the TRI workflow, the translator learns as part of the process because they are actively involved in resolving suggestions and disagreements.
6. The Practical Implication of ISO 17100
We also reviewed ISO 17100 and its requirement that translations be revised by a second person. Harry highlighted that a PM sanity check does not count toward this requirement because PMs are typically not capable of revising the text linguistically. This helped connect workflow choices to client expectations. Some clients may not ask for ISO explicitly, but if they operate in regulated or procurement-heavy environments, they may assume it.
In my opinion, this is a good reminder that quality standards are not just about language. They are also about process evidence. In some contexts, the workflow itself is part of the deliverable.
WEEK 6
1. Last week’s QA checker exercise
This week started with a quick reflection on last week’s QA checker exercise. One point Harry emphasized was that most automated QA systems are designed to catch mechanical issues such as punctuation, number mismatches, missing tags, spacing issues, or formatting inconsistencies. But these systems are not built to evaluate meaning, tone, or contextual accuracy. In other words, they help detect technical problems, not linguistic quality.
In practice, automated QA works best when it focuses on predictable and repeatable checks.
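As a way of pinning this down for myself, here is a minimal Python sketch (my own, not from class) of the kind of mechanical check such tools perform, comparing inline tags and placeholders between source and target. The regex and rules are simplified assumptions:

```python
import re

# Matches placeholders like {0} and simple inline tags like <b> or </b>.
TAG_PATTERN = re.compile(r"\{\d+\}|</?\w+>")

def check_mechanics(source: str, target: str) -> list[str]:
    issues = []
    # Tags and placeholders in the source should all survive translation.
    if sorted(TAG_PATTERN.findall(source)) != sorted(TAG_PATTERN.findall(target)):
        issues.append("tag or placeholder mismatch")
    if "  " in target:
        issues.append("double space in target")
    return issues
```

Notice that the check knows nothing about meaning: `check_mechanics("Click <b>Save</b>", "Cliquez sur <b>Enregistrer</b>")` passes even though the tool cannot tell whether the French is accurate. Naive rules like these also over-flag in some locales, which is exactly why such tools need tuning.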
2. The Calibration Problem
Another issue then came up. QA tools sometimes flag things that are not actually errors. When that happens repeatedly, the tool stops being helpful and starts slowing reviewers down. This then becomes a calibration problem. If the checker is too strict, reviewers waste time investigating harmless flags. If it is too loose, real issues slip through.
3. The Cost of Prevention
The main concept this week was the cost of quality, starting with cost of prevention. Prevention includes all the steps taken before delivery to reduce the chance of errors, such as better translators, additional review processes, clearer instructions, or more robust workflows.
Quality improvement is not linear. Early improvements are relatively cheap and produce significant gains. But as we push toward near perfect output, the marginal cost rises quickly.
4. The Cost of Failure

Unlike prevention cost, failure cost depends on what errors slip through, whether the client notices, and how they react.
Even when a review stage exists, reviewers do not catch everything. Some errors inevitably reach the client. The outcome might be endless rework, complaints, or a more serious loss of trust.
This is like rolling a die. If you roll it once, you might get lucky. But if you repeat the same risky setup over and over, the bad outcome eventually becomes almost guaranteed.
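The die-rolling intuition can be made exact with a one-line probability formula. In this Python sketch (the numbers are my own assumption, not from class), even a modest 5% chance of a serious failure per project compounds quickly over 20 projects:

```python
def p_at_least_one_failure(p_per_project: float, projects: int) -> float:
    # Probability that at least one failure occurs across independent projects:
    # the complement of "every single project goes fine."
    return 1 - (1 - p_per_project) ** projects

risk = p_at_least_one_failure(0.05, 20)
print(f"{risk:.0%}")  # roughly 64%: the risky setup, repeated, becomes the likely outcome
```

Seen this way, prevention spending buys down `p_per_project`, which shrinks the compounded risk far faster than intuition suggests.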
This framing helped me see prevention spending differently. It is not just about improving quality. It is also about reducing uncertainty.
5. “Internal” vs “External” Failures
Internal failures are issues caught before delivery. They create rework, delays, or additional reviewer effort, but the client never sees them.
External failures are issues caught after delivery. These include client complaints, visible mistakes, brand damage, or even loss of contracts.
6. High Risk Content
High risk content includes health and safety information, high visibility marketing materials, or anything tied directly to brand reputation. Low risk content might include informational text that few users will ever read.
This brings me back to an earlier topic from the course. Quality is always tied to purpose. The appropriate workflow depends on the potential impact of failure.