Add cache, fetch, retry logic to tests (#829)

* Add cache, fetch, retry logic to tests

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* run in parallel

* add pytest-xdist

* undo parallelism; need to remove the http server to enable it

* whoops, an extra space

* Pass flake8

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix the spelling of "fulfill"

* use a decorator for fetch-if-not-in-cache (sketched below, after this list)

* Fix --headed and limit to PlaywrightRequestError

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* docs on cache

* CICD caching of conda on unstable builds

* fix config issues

* empty commit to trigger gh-actions

* restore build-unstable

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Remove http server, add parallel

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* temp: Bypass zip runtime test and point to v0.21.3 on CDN

* support for files in the zip under /pyodide

* remove test-one

* self.http_server and remove content_type

* domcontentloaded with no timeout on the base URL + a longer timeout on wait_for_pyscript (sketched below, after this list)

* Fixed #678

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* set default timeout to 60000

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* seamless --headed support

* add test-integration-parallel and make it the default for GH Actions

* simplify the code. Use http://fakeserver instead of localhost:8080 so that it's clearer that the browser is NOT hitting a real server, and use urllib to parse the URL. Moreover, the special case for pyodide is no longer needed; it's handled automatically by the normal 'fakeserver' logic (see the routing sketch after this list)

* The page-routing logic is becoming too complicated to stay as an inner function. Move it to its own class, and add some logic to work around a limitation of Playwright, which simply hangs if a Python exception is raised inside the handler (see the class sketch after this list)

* no need to use a hash, we can use the url as the key

* re-implement the retry logic. The old @retry decorator was nice but a bit over-engineered and, most importantly, failed silently in case of exceptions. The new approach is less powerful, but since we want to retry only two times, simple is better than complex -- and in case of failure the exception is actually raised (see the retry sketch after this list)

* improve logging

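The sketches below illustrate a few of the commits above. They are hedged approximations, not the project's actual code: any name, selector, path, or timeout that does not appear in the commits themselves is an assumption.

First, a minimal sketch of the "decorator for fetch if not in cache" idea, using the URL itself as the cache key (the `fetch_if_not_cached` name and the module-level dict are hypothetical):

```python
import functools
import urllib.request

# Hypothetical module-level cache, keyed directly by URL (no hash needed).
_CACHE: dict[str, bytes] = {}


def fetch_if_not_cached(fn):
    """Call the wrapped fetch only when the URL is not already cached."""

    @functools.wraps(fn)
    def wrapper(url: str) -> bytes:
        if url not in _CACHE:
            _CACHE[url] = fn(url)
        return _CACHE[url]

    return wrapper


@fetch_if_not_cached
def fetch(url: str) -> bytes:
    # Real network fetch; runs at most once per URL per test session.
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```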
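The "domcontentloaded with no timeout + longer wait_for_pyscript timeout" change could look roughly like this with Playwright's sync API (the URL and the `py-script` selector are illustrative; the 60000 ms value echoes the "default timeout" commit):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()  # launch(headless=False) for --headed runs
    page = browser.new_page()

    # Navigate waiting only for DOMContentLoaded, with the navigation timeout disabled...
    page.goto("http://localhost:8080/index.html", wait_until="domcontentloaded", timeout=0)

    # ...then give PyScript itself a longer budget (60s) to finish starting up.
    page.wait_for_selector("py-script", timeout=60_000)

    browser.close()
```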
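A sketch of the 'fakeserver' routing, using urllib to map requests onto files on disk instead of a live localhost server (directory layout and route pattern are assumptions):

```python
from pathlib import Path
from urllib.parse import urlparse

BASE_URL = "http://fakeserver"   # clearly not a real server
ROOT = Path("examples")          # assumed directory exposed to the browser


def url_to_path(url: str) -> Path:
    """Map http://fakeserver/foo/bar.html onto a file under ROOT."""
    parsed = urlparse(url)
    assert parsed.netloc == "fakeserver", f"unexpected host: {parsed.netloc}"
    return ROOT / parsed.path.lstrip("/")


def router(route):
    # Playwright route handler: fulfill every request from the local filesystem.
    path = url_to_path(route.request.url)
    route.fulfill(status=200, body=path.read_bytes())


# Typical wiring inside a test:
#   page.route("**/*", router)
#   page.goto(f"{BASE_URL}/index.html")
```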
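The page-routing class with the workaround for Playwright hanging on unhandled Python exceptions might be shaped like this (class and method names invented for illustration; the logger call mirrors the conftest below):

```python
import traceback


class FakeServer:
    """Page-routing logic, pulled out of the test into its own class."""

    def __init__(self, logger):
        self.logger = logger

    def router(self, route):
        # Playwright just hangs if an exception escapes a route handler,
        # so catch everything, log the traceback, and abort the request,
        # which makes the test fail visibly instead of stalling.
        try:
            self._fulfill(route)
        except Exception:
            self.logger.log("fakeserver", traceback.format_exc(), color="red")
            route.abort()

    def _fulfill(self, route):
        # Real file-serving logic (see the previous sketch) would go here.
        raise NotImplementedError
```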
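Finally, a sketch of the simplified retry logic: a plain loop that tries a couple of times and re-raises the last exception instead of failing silently (the attempt count and the broad `except` are placeholders):

```python
from typing import Callable


def fetch_with_retries(fetch: Callable[[str], bytes], url: str, attempts: int = 3) -> bytes:
    """Plain loop instead of a @retry decorator: a couple of retries, then re-raise."""
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return fetch(url)
        except Exception as exc:  # the real code narrows this to request errors
            last_exc = exc
            print(f"fetch({url!r}) failed on attempt {attempt}/{attempts}: {exc!r}")
    raise last_exc  # unlike the old decorator, the final failure is not silent
```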
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Madhur Tandon <madhurtandon23@gmail.com>
Co-authored-by: Antonio Cuni <anto.cuni@gmail.com>
Ted Patrick
2022-10-07 10:31:26 -05:00
committed by GitHub
parent 11a517bba4
commit f138b5a4f4
8 changed files with 157 additions and 61 deletions


@@ -1,8 +1,4 @@
 """All data required for testing examples"""
-import threading
-from http.server import HTTPServer as SuperHTTPServer
-from http.server import SimpleHTTPRequestHandler
 import pytest
 from .support import Logger
@@ -11,41 +7,3 @@ from .support import Logger
 @pytest.fixture(scope="session")
 def logger():
     return Logger()
-
-
-class HTTPServer(SuperHTTPServer):
-    """
-    Wrapper class to run SimpleHTTPServer on a thread.
-    When terminated with Ctrl+C, only the thread dies;
-    the KeyboardInterrupt passes through.
-    """
-
-    def run(self):
-        try:
-            self.serve_forever()
-        except KeyboardInterrupt:
-            pass
-        finally:
-            self.server_close()
-
-
-@pytest.fixture(scope="session")
-def http_server(logger):
-    class MyHTTPRequestHandler(SimpleHTTPRequestHandler):
-        def log_message(self, fmt, *args):
-            logger.log("http_server", fmt % args, color="blue")
-
-    host, port = "127.0.0.1", 8080
-    base_url = f"http://{host}:{port}"
-
-    # Run the server forever on a background thread
-    server = HTTPServer((host, port), MyHTTPRequestHandler)
-    thread = threading.Thread(None, server.run)
-    thread.start()
-
-    yield base_url  # transition to the tests here
-
-    # End the thread
-    server.shutdown()
-    thread.join()