* install uv and use it to create venv
* fix lint.sh to use all necessary deps
* make the test scripts use uv
* put uv into system path
* clearer setup.sh output
* don't look for uv in venv
* update the workflows to use uv
* lowercase pwndbg in the update message
* fix coverage invocation
* more robust test invocation
* pre-sync docs build
* don't pass venv to find_uv in [gdb/lldb]init
* uv sync before lint for more robustness
* make lldb work out of the box together with gdb
* don't uninstall dependencies when syncing
* modify scripts to use uv inside venv
* update workflows
* fix lint for scripts/
* update doc verifier workflow
* let nix magic check uv.lock
* use the venv as specified in scripts so it works in Docker
* add uv to project deps
* fix tests venv location
* revert uv venv lookup changes
* fix kernel tests
* fix nix
* work without venv, refactor code, packagers enjoy
* fix dockerfiles
* no POSIX; bash is my new best friend
* don't make venv in nix
* cleaned up paths
* Update gdbinit.py
* rebase: update link and uv lock
* Update lldbinit.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update scripts/common.sh
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update gdbinit.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fixup bad rebase (setuptools)
* don't use UV if the .skip-venv file exists
* document the PWNDBG_PLEASE_SKIP_VENV option (see the sketch after this list)
* fix nix devshell
* Update lldbinit.py
* extend -> append
---------
Co-authored-by: Disconnect3d <dominik.b.czarnota@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
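As a minimal sketch of the skip-venv behavior mentioned in the list above: the `.skip-venv` file name and the `PWNDBG_PLEASE_SKIP_VENV` variable come from the commits themselves, but the surrounding logic and paths below are assumptions for illustration, not pwndbg's actual implementation.

```sh
#!/usr/bin/env bash
# Hypothetical sketch: honor the .skip-venv marker file and the
# PWNDBG_PLEASE_SKIP_VENV variable before doing any uv/venv management.
if [[ -f .skip-venv || -n "${PWNDBG_PLEASE_SKIP_VENV:-}" ]]; then
    echo "Skipping venv management; using the system Python environment"
else
    uv venv .venv   # create the virtual environment with uv
    uv sync         # install project dependencies into it
fi
```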
* Add gdb_version to mock gdblib
* Re-enable unit tests
* Only collect unit test coverage if --cov is passed
* Source venv before running tests in GitHub Actions
* Add venv path to PATH in Dockerfile (see the sketch below)
* Only check for "/ls" in `which` test
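The venv-related items above roughly translate into steps like the following; the paths and the test command are illustrative guesses, not the project's exact configuration.

```sh
# Hypothetical sketch of the venv handling described above.
# In the GitHub Actions test job: activate the venv before running tests.
source .venv/bin/activate

# In the Dockerfile the equivalent is putting the venv's bin directory on
# PATH (shown here in shell form; the path is illustrative):
export PATH="$PWD/.venv/bin:$PATH"

# The test runner then resolves Python and its tools from the venv:
python -m pytest tests/
```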
using the "PIP_NO_CACHE_DIR" env with pip install, make sure downloaded packages by pip don't cache on the system. This is a best practice that makes sure to fetch from a repo instead of using a local cached one. Further, in the case of Docker Containers, by restricting caching, we can reduce image size. In terms of stats, it depends upon the number of python packages multiplied by their respective size. e.g for heavy packages with a lot of dependencies it reduces a lot by don't cache pip packages.
Further, more detailed information can be found at
https://medium.com/sciforce/strategies-of-docker-images-optimization-2ca9cc5719b6
Signed-off-by: Pratik Raj <rajpratik71@gmail.com>
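As a concrete illustration of the pip caching note above: the same effect can be achieved through the environment variable or through pip's `--no-cache-dir` flag. The requirements file name below is illustrative, and since pip's parsing of the environment variable's value has varied across versions, the flag form is the safer choice.

```sh
# Disable pip's download cache for one install (e.g. in a Dockerfile RUN step):
PIP_NO_CACHE_DIR=1 pip install -r requirements.txt

# Equivalent per-invocation flag, independent of environment variables:
pip install --no-cache-dir -r requirements.txt
```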
- remove submodules from all files
- bump flake.lock
- add gdb-pt-dump as dependency
- fix building Dockerfile
- fix gdb-pt-dump, which was broken in portable packages
* Specify Dockerfile for ubuntu/debian
Dockerfile.arch to be added later
* Support Arch Linux docker test
* Fix setup-dev supported distro
* Create set_zigpath function
* Download zig from upstream for archlinux
* Add hash as part of key for docker cache
as https://github.com/satackey/action-docker-layer-caching#inputs notes.