mirror of
https://github.com/pyscript/pyscript.git
synced 2026-02-17 10:01:09 -05:00
Add additional pre-commit hooks (#245)
* Add and run end-of-file-fixer
* Add and run trailing-whitespace
* Add and run check-yaml
* Add and run check-json
* Add and run pretty-format-yaml
* Fix comment indentation
@@ -8,13 +8,13 @@
      <link rel="stylesheet" href="../build/pyscript.css" />
      <script defer src="../build/pyscript.js"></script>
      <py-env>
        - micrograd
        - numpy
        - matplotlib
      </py-env>
      <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/css/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous">
  </head>
@@ -22,25 +22,25 @@
      <h1>Micrograd - A tiny Autograd engine (with a bite! :))</h1><br>
      <div>
        <p>
          <a href="https://github.com/karpathy/micrograd">Micrograd</a> is a tiny Autograd engine created
          by <a href="https://twitter.com/karpathy">Andrej Karpathy</a>. This app recreates the
          <a href="https://github.com/karpathy/micrograd/blob/master/demo.ipynb">demo</a>
          he prepared for this package, using pyscript to train a basic model, written in Python, natively in
          the browser. <br>
        </p>
      </div>
      <div>
        <p>
          You may run each Python REPL cell interactively by pressing (Shift + Enter) or (Ctrl + Enter).
          You can also modify the code directly as you wish. If you want to run all the code at once,
          rather than each cell individually, you may instead click the 'Run All' button. Training the model
          takes 1-2 min if you decide to 'Run All' at once. 'Run All' is also your only option if
          you are running this on a mobile device where you cannot press (Shift + Enter). After the
          model is trained, a plot image should be displayed depicting the model's ability to
          classify the data. <br>
        </p>
        <p>
          Currently the <code>></code> symbol is imported incorrectly as <code>&gt;</code> into the REPLs.
          In this app the <code>></code> symbol has therefore been replaced with <code>().__gt__()</code> so you can run the code
          without issue. E.g. instead of <code>a > b</code>, you will see <code>(a).__gt__(b)</code>. <br>
        </p>
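To see why this substitution is safe, here is a minimal plain-Python sketch (the values are illustrative, not from the demo): calling the dunder method directly evaluates to the same result as the operator it replaces.

    # Illustrative only: the __gt__ dunder call evaluates to the same
    # result as the > operator it stands in for.
    a, b = 3, 2
    print(a > b, (a).__gt__(b))   # True True

    # The same pattern used in the demo's accuracy line, on toy values:
    yb, scores = [1, -1], [0.5, -0.7]
    print([((yi).__gt__(0)) == ((si).__gt__(0)) for yi, si in zip(yb, scores)])   # [True, True]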
@@ -99,7 +99,7 @@ print("number of parameters", len(model.parameters()))
      </py-repl><br>

      <div>
        Line 24 has been changed from: <br>
        <code>accuracy = [(yi > 0) == (scorei.data > 0) for yi, scorei in zip(yb, scores)]</code><br>
        to: <br>
        <code>accuracy = [((yi).__gt__(0)) == ((scorei.data).__gt__(0)) for yi, scorei in zip(yb, scores)]</code><br>
@@ -108,7 +108,7 @@ print("number of parameters", len(model.parameters()))
      <py-repl auto-generate="true">
# loss function
def loss(batch_size=None):

    # inline DataLoader :)
    if batch_size is None:
        Xb, yb = X, y
@@ -116,10 +116,10 @@ def loss(batch_size=None):
        ri = np.random.permutation(X.shape[0])[:batch_size]
        Xb, yb = X[ri], y[ri]
    inputs = [list(map(Value, xrow)) for xrow in Xb]

    # forward the model to get scores
    scores = list(map(model, inputs))

    # svm "max-margin" loss
    losses = [(1 + -yi*scorei).relu() for yi, scorei in zip(yb, scores)]
    data_loss = sum(losses) * (1.0 / len(losses))
@@ -127,7 +127,7 @@ def loss(batch_size=None):
    alpha = 1e-4
    reg_loss = alpha * sum((p*p for p in model.parameters()))
    total_loss = data_loss + reg_loss

    # also get accuracy
    accuracy = [((yi).__gt__(0)) == ((scorei.data).__gt__(0)) for yi, scorei in zip(yb, scores)]
    return total_loss, sum(accuracy) / len(accuracy)
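For readers who want to see this objective in isolation, here is a standalone NumPy sketch of the same svm "max-margin" loss on plain floats (no autograd; the name svm_loss and the toy arrays are illustrative, not part of the demo): labels are in {-1, +1}, a correct and confident score contributes zero hinge loss, and an L2 penalty on the parameters is added.

    import numpy as np

    # Standalone sketch of the max-margin loss above, on plain floats.
    def svm_loss(scores, y, params, alpha=1e-4):
        hinge = np.maximum(0.0, 1.0 - y * scores)   # per-example hinge loss
        data_loss = hinge.mean()
        reg_loss = alpha * np.sum(params ** 2)      # L2 regularization
        accuracy = np.mean((y > 0) == (scores > 0)) # sign agreement
        return data_loss + reg_loss, accuracy

    scores = np.array([0.8, -1.2, 0.1])
    y = np.array([1.0, -1.0, -1.0])
    total, acc = svm_loss(scores, y, params=np.zeros(3))
    print(total, acc)   # third example is misclassified, so accuracy is 2/3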
@@ -138,25 +138,25 @@ print(total_loss, acc)
      <py-repl auto-generate="true">
# optimization
for k in range(20):  # was 100. Accuracy can be further improved w/ more epochs (to 100%).

    # forward
    total_loss, acc = loss()

    # backward
    model.zero_grad()
    total_loss.backward()

    # update (sgd)
    learning_rate = 1.0 - 0.9*k/100
    for p in model.parameters():
        p.data -= learning_rate * p.grad

    if k % 1 == 0:  # prints every step; k % 1 is always 0
        print(f"step {k} loss {total_loss.data}, accuracy {acc*100}%")
      </py-repl><br>
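One thing worth noting in the update step above: with the loop cut from 100 to 20 iterations, the linear schedule learning_rate = 1.0 - 0.9*k/100 never decays very far. A quick plain-Python check (illustrative, not part of the demo):

    # Sample the linear decay schedule at a few step counts.
    for k in (0, 10, 19, 99):
        print(k, 1.0 - 0.9*k/100)   # 1.0, 0.91, ~0.83, ~0.11

So the shortened 20-step run still finishes with a rate of about 0.83, whereas the original 100-step run would have decayed to about 0.1.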
      <div>
        <p>
          Please wait for the training loop above to complete. It will not print out stats until it
          has completely finished. This typically takes 1-2 min. <br><br>

          Line 9 has been changed from: <br>