Fix #17050 actions warning message #17098
Conversation
Currently, user documentation such as the User Guide is made translatable via a custom (and very old) translation system hosted by NV Access. For many reasons we need to move away from this old system to something more mainstream and maintainable. We have already successfully moved translation of NVDA interface messages to Crowdin, and we should do the same for the user guide and other documentation.

Description of development approach

- Added markdownTranslate.py, which contains several commands for generating and updating xliff files from markdown files. These xliff files can then be uploaded to Crowdin for translation, and eventually downloaded again and converted back to markdown files. Commands include:
  - generateXliff: generates an xliff file from a markdown file. First, a 'skeleton' of the markdown file is produced, which contains all the structure of the markdown file, but the translatable content on each line has been replaced by a special translation ID. Lines such as blank lines, hidden header rows, or table header separator lines are included in the skeleton intact and are not available for translation. The xliff file is then produced, containing one translatable string per translation unit, keyed by its respective translation ID. Each unit also contains translator notes to aid in translation, such as the line number and any prefix or suffix markdown structure; e.g. a heading might have a prefix of ### and a suffix of {#Intro}. The skeleton is also embedded into the xliff file so that it is possible to update the xliff file while keeping existing translation IDs, and/or regenerate the existing markdown file from the xliff file. (A minimal sketch of this skeleton/ID idea is shown after this list.)
  - generateMarkdown: given an xliff file, the original markdown file is reproduced from the embedded skeleton, using either the translated or source strings from the xliff file, depending on whether you want a translated or untranslated markdown file.
  - updateXliff: updates an existing xliff file with changes from a markdown file, ensuring that IDs of existing translatable strings are kept intact. This command extracts the skeleton from the xliff file, makes a diff of the old and new markdown files, then applies this diff to the skeleton file, i.e. removes skeleton lines that were removed from the markdown file, and adds skeleton lines (with new IDs) for lines that are newly added to the markdown file. All existing lines stay as is, keeping their existing translation IDs. Finally, a new xliff file is generated from the up-to-date markdown file and skeleton, resulting in an xliff file that contains all translatable strings from the new markdown file, while reusing translation IDs for existing strings.
  - translateXliff: given an xliff file and a pretranslated markdown file that matches the skeleton, a new xliff file is produced containing translations for all strings.
  - pretranslateAllPossibleLangs: walks the NVDA user_docs directory and, for each language, pretranslates the English xliff file using the existing pretranslated markdown file from the old translation system (if it matches the skeleton exactly), producing a translated xliff file that can be uploaded to Crowdin to bring an existing translation up to where it was in the old system.
- Added a generated xliff file for the current English user guide markdown file. Note that this has been uploaded to Crowdin for translation.
- Added a GitHub action that runs on the beta branch if the English userGuide.md changes. The action regenerates the original markdown file from the current English user guide xliff, then updates the xliff file based on the changes from the original markdown file to the current markdown file. This xliff file is then uploaded to Crowdin, and also committed and pushed to beta.
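To make the skeleton and translation-ID idea above concrete, here is a minimal, self-contained sketch. It is not the markdownTranslate.py implementation: the regular expression, the ID placeholder format, and the heading-only handling are simplified assumptions for illustration.

```python
import re
import uuid

# Simplified stand-in for the real tool's heading handling: capture the markdown
# prefix (e.g. "### "), the translatable text, and an optional anchor suffix
# (e.g. " {#Intro}").
RE_HEADING = re.compile(r"^(#+ )(.+?)( \{#[^}]+\})?$")


def skeletonize(mdLine: str) -> tuple[str, str | None]:
	"""Return (skeletonLine, translatableText); non-translatable lines pass through unchanged."""
	if not mdLine.strip():
		# Blank/structural lines stay in the skeleton as-is and get no translation unit.
		return mdLine, None
	translationId = str(uuid.uuid4())
	if m := RE_HEADING.match(mdLine):
		prefix, content, suffix = m.group(1), m.group(2), m.group(3) or ""
		return f"{prefix}$(ID:{translationId}){suffix}", content
	return f"$(ID:{translationId})", mdLine


skeleton, text = skeletonize("### Getting Started {#Intro}")
print(skeleton)  # e.g. "### $(ID:<uuid>) {#Intro}"
print(text)      # "Getting Started" would become one translation unit keyed by that ID
```

In the real workflow, the skeleton is embedded in the generated xliff alongside one translation unit per ID, which is what allows updateXliff to re-diff the markdown later without issuing new IDs for unchanged lines.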
Warning message being addressed: "The following actions use a deprecated Node.js version and will be forced to run on node20: actions/checkout@v3, actions/setup-python@v4. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/"
Walkthrough: The changes introduce a new GitHub Actions workflow for automating updates to English user documentation and its XLIFF translation files.
Actionable comments posted: 7
Outside diff range, codebase verification and nitpick comments (4)
tests/unit/test_markdownTranslate.py (1)

15-40: LGTM: Test class setup is well-structured.

The TestMarkdownTranslate class is properly set up with setUp and tearDown methods for managing test resources. The helper method runMarkdownTranslateCommand is a good practice for reducing code duplication.

Consider adding type hints to the runMarkdownTranslateCommand method for improved readability:

def runMarkdownTranslateCommand(self, description: str, args: list[str]) -> None:
sconstruct (1)

336-346: Integration with existing build process

While the new functionality to generate localized Markdown files from XLIFF is well-implemented, there are some considerations regarding its integration with the existing build process:

The existing process generates HTML files from Markdown files (lines 347-357). Consider updating this process to include the newly generated localized Markdown files.

The userGuide and keyCommands targets (lines 391-402) currently only process the English versions. You might want to extend these to handle localized versions as well.

To fully integrate this new functionality, consider the following steps (a rough sketch follows this list):

- Update the HTML generation process to include the newly created localized Markdown files.
- Modify the userGuide and keyCommands targets to generate localized versions of these documents.
- Ensure that the distribution package (dist target) includes the localized documentation.

These changes would ensure that the new localized content is fully utilized in the build and distribution process.
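As a rough illustration of the integration suggested above (this is not code from the PR's sconstruct), localized markdown generation could be declared as SCons command nodes so the existing HTML and dist targets can depend on them. The directory layout, base name, and markdownTranslate.py invocation below are assumptions.

```python
# Sketch intended for an SConstruct/sconscript context where `env` already exists.
import os


def declareLocalizedMarkdownTargets(env, langsDir="user_docs", baseName="userGuide"):
	localizedMdNodes = []
	for lang in sorted(os.listdir(langsDir)):
		if lang == "en" or not os.path.isdir(os.path.join(langsDir, lang)):
			continue
		xliff = os.path.join(langsDir, lang, f"{baseName}.xliff")
		if not os.path.isfile(xliff):
			continue
		md = os.path.join(langsDir, lang, f"{baseName}.md")
		# Each xliff -> markdown conversion becomes a build node, so HTML generation
		# and the dist target can list the localized markdown as a dependency.
		localizedMdNodes.extend(
			env.Command(
				target=md,
				source=xliff,
				action="python user_docs/markdownTranslate.py generateMarkdown -x $SOURCE -o $TARGET",
			)
		)
	return localizedMdNodes
```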
user_docs/markdownTranslate.py (2)

6-20: Consider grouping imports for better readability.

The imports are comprehensive and appropriate for the functionality of the script. However, consider grouping them into standard library imports, third-party imports, and local imports for better readability.

Here's a suggested regrouping (note that difflib is part of the standard library):

# Standard library imports
import argparse
import contextlib
import difflib
import os
import re
import subprocess
import tempfile
import uuid
from dataclasses import dataclass
from itertools import zip_longest
from typing import Generator
from xml.sax.saxutils import escape as xmlEscape
from xml.sax.saxutils import unescape as xmlUnescape

# Third-party imports
import lxml.etree
292-326: Consider adding progress reporting to the updateXliff function.

The updateXliff function performs several operations but doesn't provide progress updates. Consider adding progress reporting to give users feedback on the current step being executed.

You could add print statements or use a progress bar library like tqdm to show progress for each step; a sketch follows.
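For example, a minimal sketch of that suggestion, assuming tqdm is an acceptable dependency; the step names and no-op actions below are placeholders standing in for updateXliff's real internals, not its actual structure:

```python
from tqdm import tqdm


def updateXliffWithProgress(xliffPath: str, mdPath: str, outputPath: str) -> None:
	# Placeholder steps: in the real function these would be the skeleton extraction,
	# markdown diffing, diff application and xliff writing stages.
	steps = [
		("Extracting skeleton from xliff", lambda: None),
		("Diffing old and new markdown", lambda: None),
		("Applying diff to skeleton", lambda: None),
		("Writing updated xliff", lambda: None),
	]
	for description, action in tqdm(steps, desc=f"Updating {xliffPath}", unit="step"):
		tqdm.write(description)
		action()
```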
- name: update xliff files
  shell: pwsh
  run: |
    # for any English markdown files changed within the commits of this push,
    # update the corresponding xliff file (if one exists) to reflect the current markdown file,
    # keeping existing translation IDs intact.
    $ErrorActionPreference = 'Stop'
    $changedFiles = git diff --name-only ${{github.event.before}}.. -- user_docs/en/*.md
    foreach ($file in $changedFiles) {
      Write-Host "$file has changed"
      $baseName = [System.IO.Path]::GetFileNameWithoutExtension($file)
      $xliff = "user_docs/en/$baseName.xliff"
      $tempXliff = "user_docs/en/$baseName.xliff.temp"
      $markdown = $file
      if (Test-Path $xliff) {
        Write-Host "Updating $xliff with changes from $markdown"
        python user_docs/markdownTranslate.py updateXliff -x $xliff -m $file -o $tempXliff
        Write-Host "Renaming $tempXliff to $xliff"
        move-item -Path $tempXliff -Destination $xliff -Force
      } else {
        Write-Host "Ignoring $markdown as it does not have a corresponding xliff file"
      }
    }
  if: success()
XLIFF update process is well-implemented, but could be optimized.
The script efficiently updates XLIFF files for changed markdown files. However, there's room for improvement:
- Consider using a more efficient git command to get changed files.
- The error handling could be more robust.
Consider the following optimizations:
- Replace the git command with:
-$changedFiles = git diff --name-only ${{github.event.before}}.. -- user_docs/en/*.md
+$changedFiles = git diff --name-only ${{github.event.before}} ${{github.sha}} -- user_docs/en/*.md
- Add error handling for the Python script execution:
python user_docs/markdownTranslate.py updateXliff -x $xliff -m $file -o $tempXliff
+if ($LASTEXITCODE -ne 0) {
+ Write-Error "Failed to update XLIFF file: $xliff"
+ exit 1
+}
- name: Commit and Push changes
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    $ErrorActionPreference = 'Stop'
    git config --local user.name "GitHub Actions"
    git config --local user.email "[email protected]"
    git remote set-url origin https://x-access-token:${GITHUB_TOKEN}@github.com/${{ github.repository }}.git
    $filesChanged = git diff --name-only -- *.xliff
    if ($filesChanged) {
      Write-Host "xliff files were changed. Committing and pushing changes."
      foreach ($file in $filesChanged) {
        git add $file
        git commit -m "Update $file"
      }
      git push origin HEAD
    } else {
      Write-Host "No xliff files were changed. Skipping commit and push."
    }
  if: success()
Commit and push process is well-implemented, but could be more efficient.
The process correctly commits and pushes changes for modified XLIFF files. However, there's an opportunity to optimize the git operations.
Consider the following optimizations:
- Use a single commit for all changed files:
-foreach ($file in $filesChanged) {
- git add $file
- git commit -m "Update $file"
-}
+git add *.xliff
+git commit -m "Update XLIFF files"
- Use git diff --quiet for a more efficient check:
-$filesChanged = git diff --name-only -- *.xliff
-if ($filesChanged) {
+if (-not (git diff --quiet -- *.xliff)) {
- name: Crowdin upload
  # This step must only be run after successfully pushing changes to the repository.
  # Otherwise if the push fails, subsequent runs may cause new translation IDs to be created,
  # which will cause needless retranslation of existing strings.
  env:
    crowdinProjectID: ${{ vars.CROWDIN_PROJECT_ID }}
    crowdinAuthToken: ${{ secrets.CROWDIN_AUTH_TOKEN }}
  run: |
    # Check if we changed userGuide.xliff in this action.
    # If we did, upload it to Crowdin.
    $ErrorActionPreference = 'Stop'
    $changed = git diff --name-only ${{GITHUB.SHA}}.. -- user_docs/en/userGuide.xliff
    if ($changed) {
      Write-Host "Uploading userGuide.xliff to Crowdin"
      # 18 is the file ID for userGuide.xliff in Crowdin.
      python appVeyor/crowdinSync.py uploadSourceFile 18 user_docs/en/userguide.xliff
    } else {
      Write-Host "Not uploading userGuide.xliff to Crowdin as it has not changed"
    }
Crowdin upload process is correctly implemented, but could be improved.
The process checks for changes in userGuide.xliff and uploads it to Crowdin if changed. However, there are a few points to consider:
- The comment mentions a file ID (18) for userGuide.xliff in Crowdin. This might be better stored as a variable or environment variable for easier maintenance.
- The git command to check for changes could be more efficient.
- Error handling for the Python script execution could be improved.
Consider the following improvements:
- Store the Crowdin file ID as an environment variable:
env:
crowdinProjectID: ${{ vars.CROWDIN_PROJECT_ID }}
crowdinAuthToken: ${{ secrets.CROWDIN_AUTH_TOKEN }}
+ crowdinUserGuideFileID: 18
- Use a more efficient git command:
-$changed = git diff --name-only ${{GITHUB.SHA}}.. -- user_docs/en/userGuide.xliff
+$changed = git diff --quiet ${{GITHUB.SHA}} HEAD -- user_docs/en/userGuide.xliff
+if ($LASTEXITCODE -eq 1) {
- Add error handling for the Python script:
python appVeyor/crowdinSync.py uploadSourceFile 18 user_docs/en/userguide.xliff
+if ($LASTEXITCODE -ne 0) {
+ Write-Error "Failed to upload userGuide.xliff to Crowdin"
+ exit 1
+}
def test_markdownTranslate(self):
	outDir = self.outDir.name
	testDir = self.testDir
	self.runMarkdownTranslateCommand(
		description="Generate an xliff file from the English 2024.2 user guide markdown file",
		args=[
			"generateXliff",
			"-m",
			os.path.join(testDir, "en_2024.2_userGuide.md"),
			"-o",
			os.path.join(outDir, "en_2024.2_userGuide.xliff"),
		],
	)
	self.runMarkdownTranslateCommand(
		description="Regenerate the 2024.2 markdown file from the generated 2024.2 xliff file",
		args=[
			"generateMarkdown",
			"-x",
			os.path.join(outDir, "en_2024.2_userGuide.xliff"),
			"-o",
			os.path.join(outDir, "rebuilt_en_2024.2_userGuide.md"),
			"-u",
		],
	)
	self.runMarkdownTranslateCommand(
		description="Ensure the regenerated 2024.2 markdown file matches the original 2024.2 markdown file",
		args=[
			"ensureMarkdownFilesMatch",
			os.path.join(outDir, "rebuilt_en_2024.2_userGuide.md"),
			os.path.join(testDir, "en_2024.2_userGuide.md"),
		],
	)
	self.runMarkdownTranslateCommand(
		description="Update the 2024.2 xliff file with the changes between the English 2024.2 and 2024.3beta6 user guide markdown files",
		args=[
			"updateXliff",
			"-x",
			os.path.join(outDir, "en_2024.2_userGuide.xliff"),
			"-m",
			os.path.join(testDir, "en_2024.3beta6_userGuide.md"),
			"-o",
			os.path.join(outDir, "en_2024.3beta6_userGuide.xliff"),
		],
	)
	self.runMarkdownTranslateCommand(
		description="Regenerate the 2024.3beta6 markdown file from the updated xliff file",
		args=[
			"generateMarkdown",
			"-x",
			os.path.join(outDir, "en_2024.3beta6_userGuide.xliff"),
			"-o",
			os.path.join(outDir, "rebuilt_en_2024.3beta6_userGuide.md"),
			"-u",
		],
	)
	self.runMarkdownTranslateCommand(
		description="Ensure the regenerated 2024.3beta6 markdown file matches the original 2024.3beta6 markdown",
		args=[
			"ensureMarkdownFilesMatch",
			os.path.join(outDir, "rebuilt_en_2024.3beta6_userGuide.md"),
			os.path.join(testDir, "en_2024.3beta6_userGuide.md"),
		],
	)
	self.runMarkdownTranslateCommand(
		description="Translate the 2024.3beta6 xliff file to French using the existing pretranslated French 2024.3beta6 user guide markdown file",
		args=[
			"translateXliff",
			"-x",
			os.path.join(outDir, "en_2024.3beta6_userGuide.xliff"),
			"-l",
			"fr",
			"-p",
			os.path.join(testDir, "fr_pretranslated_2024.3beta6_userGuide.md"),
			"-o",
			os.path.join(outDir, "fr_2024.3beta6_userGuide.xliff"),
		],
	)
	self.runMarkdownTranslateCommand(
		description="Regenerate the French 2024.3beta6 user guide markdown file from the French translated 2024.3beta6 xliff file",
		args=[
			"generateMarkdown",
			"-x",
			os.path.join(outDir, "fr_2024.3beta6_userGuide.xliff"),
			"-o",
			os.path.join(outDir, "fr_2024.3beta6_userGuide.md"),
		],
	)
	self.runMarkdownTranslateCommand(
		description="Ensure the regenerated French 2024.3beta6 user guide markdown file matches the original French 2024.3beta6 user guide markdown file",
		args=[
			"ensureMarkdownFilesMatch",
			os.path.join(outDir, "fr_2024.3beta6_userGuide.md"),
			os.path.join(testDir, "fr_pretranslated_2024.3beta6_userGuide.md"),
		],
	)
Consider splitting the test method into smaller, more focused test cases.

While the test_markdownTranslate method is comprehensive and covers various scenarios, it might be beneficial to split it into smaller, more focused test cases. This approach would improve maintainability and make it easier to identify which specific functionality fails if an error occurs.
Consider refactoring the test method into separate test cases, each focusing on a specific functionality:
test_generate_xliff_from_markdown
test_regenerate_markdown_from_xliff
test_update_xliff_with_changes
test_translate_xliff_to_french
This refactoring would make the tests more modular and easier to maintain. For example:
def test_generate_xliff_from_markdown(self):
	outDir = self.outDir.name
	testDir = self.testDir
	self.runMarkdownTranslateCommand(
		description="Generate an xliff file from the English 2024.2 user guide markdown file",
		args=[
			"generateXliff",
			"-m",
			os.path.join(testDir, "en_2024.2_userGuide.md"),
			"-o",
			os.path.join(outDir, "en_2024.2_userGuide.xliff"),
		],
	)
	# Add assertions to verify the generated XLIFF file

def test_regenerate_markdown_from_xliff(self):
	# Similar structure for testing markdown regeneration
	...

# Additional test methods for other functionalities
This approach would make it easier to identify which specific functionality fails if an error occurs and would improve the overall maintainability of the test suite.
def runMarkdownTranslateCommand(self, description: str, args: list[str]):
	failed = False
	try:
		subprocess.run([sys.executable, self.markdownTranslateScriptPath, *args], check=True)
	except subprocess.CalledProcessError:
		failed = True
	if failed:
		message = f"Failed when trying to {description} with command: {' '.join(args)}"
		self.fail(message)
Enhance error handling and logging in runMarkdownTranslateCommand.
The current error handling captures subprocess errors, but it doesn't provide detailed information about the failure. This could make debugging difficult, especially for complex test scenarios.
Consider enhancing the error handling to provide more detailed information:
def runMarkdownTranslateCommand(self, description: str, args: list[str]) -> None:
	try:
		result = subprocess.run(
			[sys.executable, self.markdownTranslateScriptPath, *args],
			check=True,
			capture_output=True,
			text=True,
		)
	except subprocess.CalledProcessError as e:
		error_message = (
			f"Failed when trying to {description} with command: {' '.join(args)}\n"
			f"Exit code: {e.returncode}\n"
			f"STDOUT: {e.stdout}\n"
			f"STDERR: {e.stderr}"
		)
		self.fail(error_message)
This enhancement captures both stdout and stderr, providing more context for debugging if a command fails. It also includes the exit code in the error message.
def skeletonizeLine(mdLine: str) -> str | None:
	prefix = ""
	suffix = ""
	if (
		mdLine.isspace()
		or mdLine.strip() == "[TOC]"
		or re_hiddenHeaderRow.match(mdLine)
		or re_postTableHeaderLine.match(mdLine)
	):
		return None
	elif m := re_heading.match(mdLine):
		prefix, content, suffix = m.groups()
	elif m := re_bullet.match(mdLine):
		prefix, content = m.groups()
	elif m := re_number.match(mdLine):
		prefix, content = m.groups()
	elif m := re_tableRow.match(mdLine):
		prefix, content, suffix = m.groups()
	elif m := re_kcTitle.match(mdLine):
		prefix, content, suffix = m.groups()
	elif m := re_kcSettingsSection.match(mdLine):
		prefix, content, suffix = m.groups()
	elif re_comment.match(mdLine):
		return None
	ID = str(uuid.uuid4())
	return f"{prefix}$(ID:{ID}){suffix}\n"
Consider simplifying the skeletonizeLine function.

The skeletonizeLine function uses multiple regular expressions and conditional statements. Consider refactoring it to use a dictionary of regular expressions and their corresponding actions to make it more maintainable.
Here's a suggested refactoring:
def skeletonizeLine(mdLine: str) -> str | None:
	patterns = {
		re_heading: lambda m: (m.group(1), m.group(2), m.group(3)),
		re_bullet: lambda m: (m.group(1), m.group(2), ""),
		re_number: lambda m: (m.group(1), m.group(2), ""),
		re_tableRow: lambda m: (m.group(1), m.group(2), m.group(3)),
		re_kcTitle: lambda m: (m.group(1), m.group(2), m.group(3)),
		re_kcSettingsSection: lambda m: (m.group(1), m.group(2), m.group(3)),
	}
	if mdLine.isspace() or mdLine.strip() == "[TOC]" or re_hiddenHeaderRow.match(mdLine) or re_postTableHeaderLine.match(mdLine):
		return None
	for pattern, action in patterns.items():
		if m := pattern.match(mdLine):
			prefix, content, suffix = action(m)
			ID = str(uuid.uuid4())
			return f"{prefix}$(ID:{ID}){suffix}\n"
	if re_comment.match(mdLine):
		return None
	ID = str(uuid.uuid4())
	return f"$(ID:{ID})\n"
def pretranslateAllPossibleLanguages(langsDir: str, mdBaseName: str):
	# This function walks through all language directories in the given directory, skipping en (English),
	# and translates the English xliff and skel file along with the lang's pretranslated md file
	enXliffPath = os.path.join(langsDir, "en", f"{mdBaseName}.xliff")
	if not os.path.exists(enXliffPath):
		raise ValueError(f"English xliff file {enXliffPath} does not exist")
	allLangs = set()
	succeededLangs = set()
	skippedLangs = set()
	for langDir in os.listdir(langsDir):
		if langDir == "en":
			continue
		langDirPath = os.path.join(langsDir, langDir)
		if not os.path.isdir(langDirPath):
			continue
		langPretranslatedMdPath = os.path.join(langDirPath, f"{mdBaseName}.md")
		if not os.path.exists(langPretranslatedMdPath):
			continue
		allLangs.add(langDir)
		langXliffPath = os.path.join(langDirPath, f"{mdBaseName}.xliff")
		if os.path.exists(langXliffPath):
			print(f"Skipping {langDir} as the xliff file already exists")
			skippedLangs.add(langDir)
			continue
		try:
			translateXliff(
				xliffPath=enXliffPath,
				lang=langDir,
				pretranslatedMdPath=langPretranslatedMdPath,
				outputPath=langXliffPath,
				allowBadAnchors=True,
			)
		except Exception as e:
			print(f"Failed to translate {langDir}: {e}")
			continue
		rebuiltLangMdPath = os.path.join(langDirPath, f"rebuilt_{mdBaseName}.md")
		try:
			generateMarkdown(
				xliffPath=langXliffPath,
				outputPath=rebuiltLangMdPath,
			)
		except Exception as e:
			print(f"Failed to rebuild {langDir} markdown: {e}")
			os.remove(langXliffPath)
			continue
		try:
			ensureMarkdownFilesMatch(rebuiltLangMdPath, langPretranslatedMdPath, allowBadAnchors=True)
		except Exception as e:
			print(f"Rebuilt {langDir} markdown does not match pretranslated markdown: {e}")
			os.remove(langXliffPath)
			continue
		os.remove(rebuiltLangMdPath)
		print(f"Successfully pretranslated {langDir}")
		succeededLangs.add(langDir)
	if len(skippedLangs) > 0:
		print(f"Skipped {len(skippedLangs)} languages already pretranslated.")
	print(f"Pretranslated {len(succeededLangs)} out of {len(allLangs) - len(skippedLangs)} languages.")
Consider adding parallel processing to the pretranslateAllPossibleLanguages function.

The pretranslateAllPossibleLanguages function processes languages sequentially. For improved performance, consider using parallel processing to handle multiple languages simultaneously.

You could use the concurrent.futures module to implement parallel processing:
import concurrent.futures

def pretranslateAllPossibleLanguages(langsDir: str, mdBaseName: str):
	# ... (existing code)

	def process_language(langDir):
		# ... (existing code for processing a single language)
		...

	with concurrent.futures.ThreadPoolExecutor() as executor:
		futures = [executor.submit(process_language, langDir) for langDir in allLangs if langDir != "en"]
		for future in concurrent.futures.as_completed(futures):
			try:
				future.result()
			except Exception as e:
				print(f"Failed to process language: {e}")

	# ... (existing code for printing results)
Link to issue number:
fix #17050
Summary of the issue:
Fix the GitHub Actions warning: "The following actions use a deprecated Node.js version and will be forced to run on node20: actions/checkout@v3, actions/setup-python@v4. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/"
Description of user facing changes
No user changes
Description of development approach
Upgrade actions/checkout@v3 and actions/setup-python@v4 to their latest stable versions to get rid of the warning message above.
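For reference, a hypothetical helper (not part of this PR or of NVDA's tooling) that scans the repository's workflow files for the deprecated pins named in the warning and suggests node20-based replacements. The target versions in the mapping are assumptions based on the upgrade described above (checkout v3 -> v4, setup-python v4 -> v5).

```python
import pathlib
import re

# Assumed mapping from deprecated pins to their node20-based successors.
DEPRECATED_PINS = {
	"actions/checkout@v3": "actions/checkout@v4",
	"actions/setup-python@v4": "actions/setup-python@v5",
}


def findDeprecatedPins(workflowDir: str = ".github/workflows") -> None:
	for pattern in ("*.yml", "*.yaml"):
		for path in sorted(pathlib.Path(workflowDir).glob(pattern)):
			for lineNo, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
				for old, new in DEPRECATED_PINS.items():
					if re.search(rf"uses:\s*{re.escape(old)}\b", line):
						print(f"{path}:{lineNo}: {old} -> consider {new}")


if __name__ == "__main__":
	findDeprecatedPins()
```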
Testing strategy:
Not yet.
Known issues with pull request:
None yet.