chore(i18n,learn): processed translations (#48171)

This commit is contained in:
camperbot
2022-10-21 19:04:50 +01:00
committed by GitHub
parent fa6a878095
commit e3c137263c
185 changed files with 517 additions and 517 deletions


@@ -8,29 +8,29 @@ dashedName: book-recommendation-engine-using-knn
# --description--
You will be <a href="https://colab.research.google.com/github/freeCodeCamp/boilerplate-book-recommendation-engine/blob/master/fcc_book_recommendation_knn.ipynb" target="_blank" rel="noopener noreferrer nofollow">working on this project with Google Colaboratory</a>.
你将<a href="https://colab.research.google.com/github/freeCodeCamp/boilerplate-book-recommendation-engine/blob/master/fcc_book_recommendation_knn.ipynb" target="_blank" rel="noopener noreferrer nofollow">使用 Google Colaboratory 来完成这个项目</a>。
After going to that link, create a copy of the notebook either in your own account or locally. Once you complete the project and it passes the test (included at that link), submit your project link below. If you are submitting a Google Colaboratory link, make sure to turn on link sharing for "anyone with the link."
进入该链接后,在你自己的账户中或本地创建一个该笔记本的副本。 一旦你完成项目并通过测试(测试包含在该链接中),请在下面提交你的项目链接。 如果你提交的是 Google Colaboratory 的链接,请确保为 “anyone with the link” 开启链接共享。
We are still developing the interactive instructional content for the machine learning curriculum. For now, you can go through the video challenges in this certification. You may also have to seek out additional learning resources, similar to what you would do when working on a real-world project.
我们仍在开发机器学习课程的交互式教学内容。 目前,你可以先学习这个认证中的视频挑战。 你可能还需要自行寻找额外的学习资源,就像在实际项目中工作时那样。
# --instructions--
In this challenge, you will create a book recommendation algorithm using **K-Nearest Neighbors**.
在这个挑战中,你将使用 **K-Nearest Neighbors** 创建一个图书推荐算法。
You will use the <a href="http://www2.informatik.uni-freiburg.de/~cziegler/BX/" target="_blank" rel="noopener noreferrer nofollow">Book-Crossings dataset</a>. This dataset contains 1.1 million ratings (scale of 1-10) of 270,000 books by 90,000 users.
你将使用 <a href="http://www2.informatik.uni-freiburg.de/~cziegler/BX/" target="_blank" rel="noopener noreferrer nofollow">Book-Crossings 数据集</a>。 该数据集包括 90,000 名用户对 270,000 册书籍的 110 万份评分(评分从 1 至 10)。
After importing and cleaning the data, use `NearestNeighbors` from `sklearn.neighbors` to develop a model that shows books that are similar to a given book. The Nearest Neighbors algorithm measures the distance to determine the “closeness” of instances.
导入并清理数据后,使用 `sklearn.neighbors` 中的 `NearestNeighbors` 开发一个模型,显示与给定书籍相似的书籍。 最近邻算法测量距离以确定实例的“接近度”。
Create a function named `get_recommends` that takes a book title (from the dataset) as an argument and returns a list of 5 similar books with their distances from the book argument.
创建一个名为 `get_recommends` 的函数,它将书名(来自数据集)作为参数,并返回一个包含 5 本相似书籍及其与该参数书籍距离的列表。
This code:
这段代码:
```py
get_recommends("The Queen of the Damned (Vampire Chronicles (Paperback))")
```
should return:
应该返回:
```py
[
@@ -45,15 +45,15 @@ should return:
]
```
Notice that the data returned from `get_recommends()` is a list. The first element in the list is the book title passed into the function. The second element in the list is a list of five more lists. Each of the five lists contains a recommended book and the distance from the recommended book to the book passed into the function.
请注意,从 `get_recommends()` 返回的数据是一个列表。 列表中的第一个元素是传递给函数的书名。 列表中的第二个元素是另外五个列表的列表。 五个列表中的每一个都包含一本推荐书以及从推荐书到传递给函数的书的距离。
If you graph the dataset (optional), you will notice that most books are not rated frequently. To ensure statistical significance, remove from the dataset users with less than 200 ratings and books with less than 100 ratings.
如果你绘制数据集的图表(可选),你会注意到大多数书籍的被评分次数并不多。 为了确保统计显著性,请从数据集中删除评分少于 200 次的用户和被评分少于 100 次的书籍。
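The filtering-then-KNN idea can be sketched on a tiny stand-in matrix. The titles and ratings below are made up for illustration; the real project pivots the filtered Book-Crossings ratings into a book-by-user matrix first:

```py
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy book-by-user rating matrix: rows are books, columns are users.
titles = ["Book A", "Book B", "Book C", "Book D"]
ratings = np.array([
    [10, 0, 8, 0],
    [9, 0, 7, 1],
    [0, 10, 0, 9],
    [1, 9, 0, 10],
], dtype=float)

# Cosine distance treats books with similar rating patterns as "close".
model = NearestNeighbors(metric="cosine")
model.fit(ratings)

def get_recommends(title):
    # Query the nearest rows; index 0 is always the book itself (distance 0),
    # so skip it and return the rest as [title, distance] pairs.
    idx = titles.index(title)
    distances, indices = model.kneighbors(ratings[[idx]], n_neighbors=3)
    recs = [[titles[i], float(d)]
            for i, d in zip(indices[0][1:], distances[0][1:])]
    return [title, recs]
```

The real `get_recommends` follows the same shape but asks for six neighbors (the book itself plus five recommendations) and maps ISBNs back to titles.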
The first three cells import libraries you may need and the data to use. The final cell is for testing. Write all your code in between those cells.
前三个单元格导入你可能需要的库和要使用的数据。 最后一个单元用于测试。 在这些单元格之间写下所有代码。
# --hints--
It should pass all Python tests.
它应该通过所有的 Python 测试。
```js


@@ -8,21 +8,21 @@ dashedName: cat-and-dog-image-classifier
# --description--
You will be <a href="https://colab.research.google.com/github/freeCodeCamp/boilerplate-cat-and-dog-image-classifier/blob/master/fcc_cat_dog.ipynb" target="_blank" rel="noopener noreferrer nofollow">working on this project with Google Colaboratory</a>.
你将<a href="https://colab.research.google.com/github/freeCodeCamp/boilerplate-cat-and-dog-image-classifier/blob/master/fcc_cat_dog.ipynb" target="_blank" rel="noopener noreferrer nofollow">使用 Google Colaboratory 来完成这个项目</a>。
After going to that link, create a copy of the notebook either in your own account or locally. Once you complete the project and it passes the test (included at that link), submit your project link below. If you are submitting a Google Colaboratory link, make sure to turn on link sharing for "anyone with the link."
进入该链接后,在你自己的账户中或本地创建一个该笔记本的副本。 一旦你完成项目并通过测试(测试包含在该链接中),请在下面提交你的项目链接。 如果你提交的是 Google Colaboratory 的链接,请确保为 “anyone with the link” 开启链接共享。
We are still developing the interactive instructional content for the machine learning curriculum. For now, you can go through the video challenges in this certification. You may also have to seek out additional learning resources, similar to what you would do when working on a real-world project.
我们仍在开发机器学习课程的交互式教学内容。 目前,你可以先学习这个认证中的视频挑战。 你可能还需要自行寻找额外的学习资源,就像在实际项目中工作时那样。
# --instructions--
For this challenge, you will complete the code to classify images of dogs and cats. You will use TensorFlow 2.0 and Keras to create a convolutional neural network that correctly classifies images of cats and dogs at least 63% of the time. (Extra credit if you get it to 70% accuracy!)
在这个挑战中,你将完成对狗和猫的图像进行分类的代码。 你将使用 TensorFlow 2.0 和 Keras 创建一个卷积神经网络,使其对猫和狗图像的正确分类率至少达到 63%。 (如果你能达到 70% 的准确率,可以加分!)
Some of the code is given to you but some code you must fill in to complete this challenge. Read the instruction in each text cell so you will know what you have to do in each code cell.
有些代码是给你的,但有些代码你必须填写才能完成这个挑战。 阅读每个文本单元中的指令,你就会知道你在每个代码单元中要做什么。
The first code cell imports the required libraries. The second code cell downloads the data and sets key variables. The third cell is the first place you will write your own code.
第一个代码单元导入所需的库。 第二个代码单元下载数据并设置关键变量。 第三个单元格是你要写自己代码的第一个地方。
The structure of the dataset files that are downloaded looks like this (You will notice that the test directory has no subdirectories and the images are not labeled):
下载的数据集文件的结构如下所示(你会注意到,测试目录没有子目录,图像也没有标注):
```py
cats_and_dogs
@@ -35,20 +35,20 @@ cats_and_dogs
|__ test: [1.jpg, 2.jpg ...]
```
You can tweak epochs and batch size if you like, but it is not required.
如果你愿意,可以调整训练周期(epoch)和批次大小,但这不是必需的。
The following instructions correspond to specific cell numbers, indicated with a comment at the top of the cell (such as `# 3`).
下面的指令对应于特定的单元格编号,在单元格的顶部用注释表示(如 `# 3`)。
## Cell 3
Now it is your turn! Set each of the variables in this cell correctly. (They should no longer equal `None`.)
现在轮到你了! 正确设置此单元格中的每个变量。 (它们不应再等于 `None`。)
Create image generators for each of the three image data sets (train, validation, test). Use `ImageDataGenerator` to read / decode the images and convert them into floating point tensors. Use the `rescale` argument (and no other arguments for now) to rescale the tensors from values between 0 and 255 to values between 0 and 1.
为三个图像数据集(训练、验证、测试)中的每一个创建图像生成器。 使用 `ImageDataGenerator` 读取/解码图像并将它们转换为浮点张量。 使用 `rescale` 参数(目前没有其他参数)将张量从 0 到 255 之间的值重新缩放到 0 到 1 之间的值。
For the `*_data_gen` variables, use the `flow_from_directory` method. Pass in the batch size, directory, target size (`(IMG_HEIGHT, IMG_WIDTH)`), class mode, and anything else required. `test_data_gen` will be the trickiest one. For `test_data_gen`, make sure to pass in `shuffle=False` to the `flow_from_directory` method. This will make sure the final predictions stay in the order that our test expects. For `test_data_gen` it will also be helpful to observe the directory structure.
对于 `*_data_gen` 变量,使用 `flow_from_directory` 方法。 传入批处理大小、目录、目标大小(`(IMG_HEIGHT, IMG_WIDTH)`)、类模式以及其他所需的内容。 `test_data_gen` 将是最棘手的一个。 对于 `test_data_gen`,确保将 `shuffle=False` 传递给 `flow_from_directory` 方法。 这将确保最终预测保持我们测试所预期的顺序。 对于 `test_data_gen`,观察目录结构也很有帮助。
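A sketch of Cell 3's generator setup. The directory and size variables (`train_dir`, `batch_size`, `IMG_HEIGHT`, `IMG_WIDTH`) are assumed to come from the notebook's second cell, and the `flow_from_directory` calls are shown as comments because they need the downloaded images:

```py
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values from [0, 255] to [0, 1]; no other arguments for now.
train_image_generator = ImageDataGenerator(rescale=1.0 / 255)
validation_image_generator = ImageDataGenerator(rescale=1.0 / 255)
test_image_generator = ImageDataGenerator(rescale=1.0 / 255)

# With the dataset downloaded, each generator reads batches from disk, e.g.:
# train_data_gen = train_image_generator.flow_from_directory(
#     train_dir, batch_size=batch_size,
#     target_size=(IMG_HEIGHT, IMG_WIDTH), class_mode="binary")
# For test_data_gen, also pass shuffle=False so predictions keep file order;
# since the test folder has no labelled subdirectories, point
# flow_from_directory at its parent directory and restrict classes=["test"].
```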
After you run the code, the output should look like this:
运行代码后,输出应如下所示:
```py
Found 2000 images belonging to 2 classes.
@@ -58,51 +58,51 @@ Found 50 images belonging to 1 class.
## Cell 4
The `plotImages` function will be used a few times to plot images. It takes an array of images and a probabilities list, although the probabilities list is optional. This code is given to you. If you created the `train_data_gen` variable correctly, then running this cell will plot five random training images.
`plotImages` 函数将多次用于绘制图像。 它需要一个图像数组和一个概率列表,尽管概率列表是可选的。 此代码已提供给你。 如果你正确地创建了 `train_data_gen` 变量,那么运行这个单元将绘制五个随机训练图像。
## Cell 5
Recreate the `train_image_generator` using `ImageDataGenerator`.
使用 `ImageDataGenerator` 重新创建 `train_image_generator`。
Since there are a small number of training examples, there is a risk of overfitting. One way to fix this problem is by creating more training data from existing training examples by using random transformations.
由于训练样本数量很少,因此存在过度拟合的风险。 解决此问题的一种方法,是通过使用随机变换,从现有训练示例创建更多训练数据。
Add 4-6 random transformations as arguments to `ImageDataGenerator`. Make sure to rescale the same as before.
添加 4-6 个随机变换作为 `ImageDataGenerator` 的参数。 确保重新缩放与以前相同。
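One possible augmented generator for Cell 5; the particular transformations and ranges here are illustrative choices, not requirements:

```py
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Same rescaling as before, plus five random transformations that
# synthesize new training examples from the existing images.
train_image_generator = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)
```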
## Cell 6
You don't have to do anything for this cell. `train_data_gen` is created just like before but with the new `train_image_generator`. Then, a single image is plotted five different times using different variations.
你无需为此单元做任何事情。 `train_data_gen` 与以前一样创建,但使用新的 `train_image_generator`。 然后,使用不同的变化对单个图像进行五次不同的绘制。
## Cell 7
In this cell, create a model for the neural network that outputs class probabilities. It should use the Keras Sequential model. It will probably involve a stack of Conv2D and MaxPooling2D layers and then a fully connected layer on top that is activated by a ReLU activation function.
在此单元格中,为输出类别概率的神经网络创建一个模型。 它应该使用 Keras Sequential 模型。 它可能会涉及一组 Conv2D 和 MaxPooling2D 层,然后是一个由 ReLU 激活函数激活的全连接层。
Compile the model passing the arguments to set the optimizer and loss. Also pass in `metrics=['accuracy']` to view training and validation accuracy for each training epoch.
编译模型并传递参数以设置优化器和损失。 同时传入 `metrics=['accuracy']` 以查看每个训练周期的训练和验证精度。
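A network along those lines might look like the following. The layer counts and sizes are one reasonable choice (assuming the notebook's 150×150 RGB inputs), not the only way to pass:

```py
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, Dense, Flatten, MaxPooling2D

# Conv2D/MaxPooling2D blocks extract features; the Dense(1, sigmoid) head
# outputs the probability that an image belongs to class 1.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(150, 150, 3)),
    MaxPooling2D(),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Cell 8 would then train it along these lines (variable names assumed to
# come from the notebook's earlier cells):
# history = model.fit(x=train_data_gen,
#                     steps_per_epoch=total_train // batch_size,
#                     epochs=epochs,
#                     validation_data=val_data_gen,
#                     validation_steps=total_val // batch_size)
```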
## Cell 8
Use the `fit` method on your `model` to train the network. Make sure to pass in arguments for `x`, `steps_per_epoch`, `epochs`, `validation_data`, and `validation_steps`.
使用 `model` 上的 `fit` 方法来训练网络。 确保为 `x`、`steps_per_epoch`、`epochs`、`validation_data` 和 `validation_steps` 传入参数。
## Cell 9
Run this cell to visualize the accuracy and loss of the model.
运行这个单元来观察模型的准确性和损失。
## Cell 10
Now it is time to use your model to predict whether a brand new image is a cat or a dog.
现在是时候使用你的模型,来预测一个全新的图像,是猫还是狗了。
In this cell, get the probability that each test image (from `test_data_gen`) is a dog or a cat. `probabilities` should be a list of integers.
在此单元格中,获取每个测试图像(来自 `test_data_gen`)是狗或猫的概率。 `probabilities` 应该是一个整数列表。
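The conversion from sigmoid outputs to that list can be sketched with a simulated prediction array; in the notebook the array would come from `model.predict(test_data_gen)`:

```py
import numpy as np

# Stand-in for model.predict output: one sigmoid value per test image.
preds = np.array([[0.93], [0.08], [0.51]])

# Round each probability to the nearest class label: 0 = cat, 1 = dog.
probabilities = [int(round(float(p[0]))) for p in preds]
```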
Call the `plotImages` function and pass in the test images and the probabilities corresponding to each test image.
调用 `plotImages` 函数并传入测试图像和每个测试图像对应的概率。
After you run the cell, you should see all 50 test images with a label showing the percentage of "sure" that the image is a cat or a dog. The accuracy will correspond to the accuracy shown in the graph above (after running the previous cell). More training images could lead to a higher accuracy.
在你运行该单元后,你应该看到所有 50 张测试图像,并有一个标签显示该图像是猫还是狗的“确定”百分比。 准确度将对应于上图中显示的准确度(在运行上一个单元格之后)。 更多的训练图像可能会导致更高的准确性。
## Cell 11
Run this final cell to see if you passed the challenge or if you need to keep trying.
运行这个最后的单元格,看看你是否通过了挑战,或者你是否需要继续努力。
# --hints--
It should pass all Python tests.
它应该通过所有的 Python 测试。
```js


@@ -8,33 +8,33 @@ dashedName: linear-regression-health-costs-calculator
# --description--
You will be <a href="https://colab.research.google.com/github/freeCodeCamp/boilerplate-linear-regression-health-costs-calculator/blob/master/fcc_predict_health_costs_with_regression.ipynb" target="_blank" rel="noopener noreferrer nofollow">working on this project with Google Colaboratory</a>.
你将<a href="https://colab.research.google.com/github/freeCodeCamp/boilerplate-linear-regression-health-costs-calculator/blob/master/fcc_predict_health_costs_with_regression.ipynb" target="_blank" rel="noopener noreferrer nofollow">使用 Google Colaboratory 来完成这个项目</a>。
After going to that link, create a copy of the notebook either in your own account or locally. Once you complete the project and it passes the test (included at that link), submit your project link below. If you are submitting a Google Colaboratory link, make sure to turn on link sharing for "anyone with the link."
进入该链接后,在你自己的账户中或本地创建一个该笔记本的副本。 一旦你完成项目并通过测试(测试包含在该链接中),请在下面提交你的项目链接。 如果你提交的是 Google Colaboratory 的链接,请确保为 “anyone with the link” 开启链接共享。
We are still developing the interactive instructional content for the machine learning curriculum. For now, you can go through the video challenges in this certification. You may also have to seek out additional learning resources, similar to what you would do when working on a real-world project.
我们仍在开发机器学习课程的交互式教学内容。 目前,你可以先学习这个认证中的视频挑战。 你可能还需要自行寻找额外的学习资源,就像在实际项目中工作时那样。
# --instructions--
In this challenge, you will predict healthcare costs using a regression algorithm.
在这个挑战中,你将使用回归算法预测医疗费用。
You are given a dataset that contains information about different people including their healthcare costs. Use the data to predict healthcare costs based on new data.
你会得到一个包含不同人群信息(包括他们的医疗费用)的数据集。 使用这些数据,根据新数据来预测医疗费用。
The first two cells of this notebook import libraries and the data.
此笔记本的前两个单元格导入库和数据。
Make sure to convert categorical data to numbers. Use 80% of the data as the `train_dataset` and 20% of the data as the `test_dataset`.
确保将分类数据转换为数字。 将 80% 的数据用作 `train_dataset`,将 20% 的数据用作 `test_dataset`。
`pop` off the "expenses" column from these datasets to create new datasets called `train_labels` and `test_labels`. Use these labels when training your model.
使用 `pop` 从这些数据集中取出 “expenses” 列,来创建名为 `train_labels` 和 `test_labels` 的新数据集。 训练模型时使用这些标签。
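A minimal sketch of that preparation on a small stand-in frame; the real notebook loads the insurance CSV, and the column names besides `expenses` are illustrative:

```py
import pandas as pd

# Tiny stand-in for the health-costs data.
dataset = pd.DataFrame({
    "age": [19, 61, 46, 22, 35],
    "sex": ["female", "male", "male", "female", "male"],
    "smoker": ["yes", "no", "no", "yes", "no"],
    "expenses": [16884.92, 13041.92, 8240.59, 21984.47, 4449.46],
})

# Convert categorical columns to numeric codes.
for col in ["sex", "smoker"]:
    dataset[col] = dataset[col].astype("category").cat.codes

# 80/20 split: sample the training rows, drop them to get the test rows.
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)

# Pop off the labels; what remains in each frame are the features.
train_labels = train_dataset.pop("expenses")
test_labels = test_dataset.pop("expenses")
```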
Create a model and train it with the `train_dataset`. Run the final cell in this notebook to check your model. The final cell will use the unseen `test_dataset` to check how well the model generalizes.
创建一个模型并使用 `train_dataset` 对其进行训练。 运行本笔记本中的最后一个单元格来检查你的模型。 最后一个单元格将使用未见过的 `test_dataset` 来检查模型的泛化能力。
To pass the challenge, `model.evaluate` must return a Mean Absolute Error of under 3500. This means it predicts health care costs correctly within $3500.
要通过挑战,`model.evaluate` 返回的平均绝对误差必须低于 3500。 这意味着它对医疗费用的预测误差在 3500 美元以内。
The final cell will also predict expenses using the `test_dataset` and graph the results.
最后一个单元格还将使用 `test_dataset` 预测费用并绘制结果图。
# --hints--
It should pass all Python tests.
它应该通过所有 Python 测试。
```js


@@ -8,25 +8,25 @@ dashedName: neural-network-sms-text-classifier
# --description--
You will be <a href="https://colab.research.google.com/github/freeCodeCamp/boilerplate-neural-network-sms-text-classifier/blob/master/fcc_sms_text_classification.ipynb" target="_blank" rel="noopener noreferrer nofollow">working on this project with Google Colaboratory</a>.
你将<a href="https://colab.research.google.com/github/freeCodeCamp/boilerplate-neural-network-sms-text-classifier/blob/master/fcc_sms_text_classification.ipynb" target="_blank" rel="noopener noreferrer nofollow">使用 Google Colaboratory 来完成这个项目</a>。
After going to that link, create a copy of the notebook either in your own account or locally. Once you complete the project and it passes the test (included at that link), submit your project link below. If you are submitting a Google Colaboratory link, make sure to turn on link sharing for "anyone with the link."
进入该链接后,在你自己的账户中或本地创建一个该笔记本的副本。 一旦你完成项目并通过测试(测试包含在该链接中),请在下面提交你的项目链接。 如果你提交的是 Google Colaboratory 的链接,请确保为 “anyone with the link” 开启链接共享。
We are still developing the interactive instructional content for the machine learning curriculum. For now, you can go through the video challenges in this certification. You may also have to seek out additional learning resources, similar to what you would do when working on a real-world project.
我们仍在开发机器学习课程的交互式教学内容。 目前,你可以先学习这个认证中的视频挑战。 你可能还需要自行寻找额外的学习资源,就像在实际项目中工作时那样。
# --instructions--
In this challenge, you need to create a machine learning model that will classify SMS messages as either "ham" or "spam". A "ham" message is a normal message sent by a friend. A "spam" message is an advertisement or a message sent by a company.
在这个挑战中,你需要创建一个机器学习模型,将短信分类为 “ham” 或 “spam”。 “ham” 消息是朋友发送的正常消息。 “spam” 消息是公司发送的广告或信息。
You should create a function called `predict_message` that takes a message string as an argument and returns a list. The first element in the list should be a number between zero and one that indicates the likeliness of "ham" (0) or "spam" (1). The second element in the list should be the word "ham" or "spam", depending on which is most likely.
你应该创建一个名为 `predict_message` 的函数,该函数接收一个消息字符串作为参数并返回一个列表。 列表中的第一个元素应该是一个介于 0 和 1 之间的数字,表示 “ham”(0)或 “spam”(1)的可能性。 列表中的第二个元素应该是单词 “ham” 或 “spam”,取决于哪个最有可能。
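A sketch of that return format. The `model.predict` call is replaced here by a trivial keyword score so the sketch runs on its own; the real function would use your trained network:

```py
def predict_message(pred_text):
    # In the notebook this would be something like:
    #   prediction = float(model.predict([pred_text])[0][0])
    # Simulated with a toy score so the sketch is self-contained.
    prediction = 0.95 if "prize" in pred_text.lower() else 0.02
    # First element: probability of spam; second: the more likely label.
    return [prediction, "spam" if prediction > 0.5 else "ham"]
```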
For this challenge, you will use the <a href="http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/" target="_blank" rel="noopener noreferrer nofollow">SMS Spam Collection</a> dataset. The dataset has already been grouped into train data and test data.
对于这个挑战,你将使用 <a href="http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/" target="_blank" rel="noopener noreferrer nofollow">SMS Spam Collection 数据集</a>。 数据集已经被分组为训练数据和测试数据。
The first two cells import the libraries and data. The final cell tests your model and function. Add your code in between these cells.
前两个单元导入库和数据。 最后一个单元测试你的模型和功能。 在这些单元格之间添加你的代码。
# --hints--
It should pass all Python tests.
它应该通过所有的 Python 测试。
```js


@@ -8,57 +8,57 @@ dashedName: rock-paper-scissors
# --description--
For this challenge, you will create a program to play Rock, Paper, Scissors. A program that picks at random will usually win 50% of the time. To pass this challenge your program must play matches against four different bots, winning at least 60% of the games in each match.
在这个挑战中,你将创建一个程序来玩石头、剪刀、布。 一个随机出招的程序通常会有 50% 的胜率。 要通过这一挑战,你的程序必须与四个不同的机器人对战,并在每场对战中赢得至少 60% 的对局。
You will be <a href="https://replit.com/github/freeCodeCamp/boilerplate-rock-paper-scissors" target="_blank" rel="noopener noreferrer nofollow">working on this project with our Replit starter code</a>.
你将使用<a href="https://replit.com/github/freeCodeCamp/boilerplate-rock-paper-scissors" target="_blank" rel="noopener noreferrer nofollow">我们在 Replit 的初始化项目</a>来完成这个项目。
We are still developing the interactive instructional part of the machine learning curriculum. For now, you will have to use other resources to learn how to pass this challenge.
我们仍在开发机器学习课程的交互式课程部分。 现在,你需要使用其他资源来学习如何通过这一挑战。
# --instructions--
In the file `RPS.py` you are provided with a function called `player`. The function takes an argument that is a string describing the last move of the opponent ("R", "P", or "S"). The function should return a string representing the next move for it to play ("R", "P", or "S").
在文件 `RPS.py` 中,你会看到一个名为 `player` 的函数。 该函数接受一个参数,该参数是一个描述对手上一步出招的字符串(“R”、“P” 或 “S”)。 该函数应返回一个表示自己下一步出招的字符串(“R”、“P” 或 “S”)。
A player function will receive an empty string as an argument for the first game in a match since there is no previous play.
在每场对战的第一局中,玩家函数将接收一个空字符串作为参数,因为之前还没有任何出招。
The file `RPS.py` shows an example function that you will need to update. The example function is defined with two arguments (`player(prev_play, opponent_history = [])`). The function is never called with a second argument so that one is completely optional. The reason why the example function contains a second argument (`opponent_history = []`) is because that is the only way to save state between consecutive calls of the `player` function. You only need the `opponent_history` argument if you want to keep track of the opponent_history.
文件 `RPS.py` 中有一个你需要更新的示例函数。 示例函数使用两个参数定义(`player(prev_play, opponent_history = [])`)。 调用该函数时从不传入第二个参数,因此它是完全可选的。 示例函数之所以包含第二个参数(`opponent_history = []`),是因为这是在连续调用 `player` 函数之间保存状态的唯一方法。 只有当你想记录对手的出招历史时,才需要 `opponent_history` 参数。
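A minimal `player` that uses the mutable default argument to keep state between calls, countering the opponent's most frequent move so far. This is a starting sketch only; it will not beat all four bots:

```py
def player(prev_play, opponent_history=[]):
    # The default list persists across calls, accumulating opponent moves.
    if prev_play:
        opponent_history.append(prev_play)
    if not opponent_history:
        return "R"  # first game of a match: no information yet
    # Play whatever beats the opponent's most frequent move so far.
    most_common = max(set(opponent_history), key=opponent_history.count)
    beats = {"R": "P", "P": "S", "S": "R"}
    return beats[most_common]
```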
*Hint: To defeat all four opponents, your program may need to have multiple strategies that change depending on the plays of the opponent.*
*提示:为了打败所有四个对手,你的程序可能需要有多种策略,并根据对手的出招进行切换。*
## Development
## 开发
Do not modify `RPS_game.py`. Write all your code in `RPS.py`. For development, you can use `main.py` to test your code.
不要修改 `RPS_game.py`。 在 `RPS.py` 中编写所有代码。 对于开发,你可以使用 `main.py` 来测试你的代码。
`main.py` imports the game function and bots from `RPS_game.py`.
`main.py` 从 `RPS_game.py` 导入游戏函数和机器人。
To test your code, play a game with the `play` function. The `play` function takes four arguments:
要测试你的代码,请使用 `play` 函数玩游戏。 `play` 函数有四个参数:
- two players to play against each other (the players are actually functions)
- the number of games to play in the match
- an optional argument to see a log of each game. Set it to `True` to see these messages.
- 两个玩家互相对战(玩家实际上是函数)
- 对战中要进行的对局数量
- 一个可选参数来查看每场比赛的日志。 将其设置为 `True` 以查看这些消息。
```py
play(player1, player2, num_games[, verbose])
```
For example, here is how you would call the function if you want `player` and `quincy` to play 1000 games against each other and you want to see the results of each game:
例如,如果你希望 `player` 和 `quincy` 互相对战 1000 场比赛,并且你想查看每场比赛的结果,你可以这样调用该函数:
```py
play(player, quincy, 1000, verbose=True)
```
Click the "run" button and `main.py` will run.
单击“运行”按钮,`main.py` 将运行。
## Testing
## 测试
The unit tests for this project are in `test_module.py`. We imported the tests from `test_module.py` to `main.py` for your convenience. If you uncomment the last line in `main.py`, the tests will run automatically whenever you hit the "run" button.
这个项目的单元测试在 `test_module.py` 中。 为了你的方便,我们将测试从 `test_module.py` 导入到 `main.py`。 如果你移除 `main.py` 中最后一行的注释,则只要你点击“运行”按钮,测试就会自动运行。
## Submitting
## 提交
Copy your project's URL and submit it below.
复制项目的 URL 并在下面提交。
# --hints--
It should pass all Python tests.
它应该通过所有的 Python 测试。
```js