* Only run arch for testing
* Remove outdated arch repo
* Actually build the docker image
* Do not include site packages in sys.path
* Ignore `.relr.dyn` section; skip lines w/o spaces
Newer binaries can contain a `.relr.dyn` section to compress `R_X86_64_RELATIVE` relocation entries.
Such binaries can be found on archlinux, for example, but also on Debian 12.
`readelf` prints the content of the section similarly to this:
```
Relocation section '.relr.dyn' at offset 0x25220 contains 35 entries:
1198 offsets
00000000001ce8d0
00000000001ce8e0
```
Compared to `00000000001d2000 0000000000000025 R_X86_64_IRELATIVE 9f330` for
`.rela.plt`.
Pwndbg now chokes on the new format because it expects a space separator where there is none.
It might be that this is actually an upstream problem with binutils, because llvm-readelf prints this:
```
Relocation section '.relr.dyn' at offset 0x25220 contains 1198 entries:
Offset Info Type Symbol's Value Symbol's Name
00000000001ce8d0 0000000000000008 R_X86_64_RELATIVE
00000000001ce8e0 0000000000000008 R_X86_64_RELATIVE
```
Nevertheless, we aren't actually interested in `R_X86_64_RELATIVE` relocations, so it should be fine to
just skip all lines that contain no spaces at all.
As far as I understand, `.relr.dyn` can only contain `R_X86_64_RELATIVE` relocations:
https://maskray.me/blog/2021-10-30-relative-relocations-and-relr
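A rough sketch of the skipping idea (the helper below is illustrative, not pwndbg's actual parser):
```
def parse_relocation_lines(readelf_output):
    """Yield (offset, info, type) tuples, skipping bare `.relr.dyn` offsets."""
    for line in readelf_output.splitlines():
        line = line.strip()
        # `.relr.dyn` prints one bare hex offset per line; those lines
        # contain no spaces and are not relocations we care about.
        if " " not in line:
            continue
        parts = line.split()
        # Keep only lines that look like `offset info R_* ...` records.
        if len(parts) >= 3 and parts[2].startswith("R_"):
            yield parts[0], parts[1], parts[2]
```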
* Accept Full RELRO in test
Archlinux has libc and ld with Full RELRO.
We now just accept Partial and Full RELRO.
* Do not copy binaries from host to docker
The `Dockerfile` copies the whole pwndbg folder into the image.
If we have built binaries on the host before, these binaries will contain references to
the host system and be *copied* into the image.
If we now run `context code` (inside docker) to have a look at the source code, this will
fail because we will try to refer to a path on the host system.
* Do not use loop index after loop
Do not use loop index after the loop. The tests assumed that the loop in line 186
would run at least once, thereby *resetting* `i` to zero. If we never enter the
loop, `i` will *continue* to have the value it had at the end of line 172.
This will cause the test to fail in mysterious ways because `i` is then not reset
to zero but still has, for example, the value `31`.
The solution is to never use `i` outside of the loop.
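A minimal, self-contained illustration of the bug pattern (the names below are made up, not the actual test code):
```
first_block = ["a"] * 32
second_block = []  # may legitimately be empty

i = 0
for i, _line in enumerate(first_block):
    pass  # work on the first block; i ends up as 31

for i, _line in enumerate(second_block):
    pass  # never entered, so i is NOT reset here

# Buggy assumption: the second loop "reset" i to zero.
print(i)  # prints 31 -- the leftover value from the first loop
```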
* Re-enable archlinux and temporarily disabled ones
* Ignore .venv files in git and docker
* Only bind mount cwd for `main`
Bind mounting `.` in every case would interfere with `.dockerignore`.
We want to ignore `.venv` so that the venv of the built docker image
is used. Otherwise we would use the venv of the host inside docker.
This would negate the whole point of testing in a docker container.
Bind mounting `.` is, however, useful if one wants to use docker just
for "sandboxing" while running the tests on the local machine.
---------
Co-authored-by: intrigus <abc123zeus@live.de>
* Fix coverage combine toml issue
This commit should fix this issue:
```
Run coverage combine
coverage combine
coverage xml
shell: /usr/bin/bash -e {0}
Can't read 'pyproject.toml' without TOML support. Install with [toml] extra
Error: Process completed with exit code 1.
```
* setup.sh: cleanup the --user flag since we use venv now
Cleans up the --user flag from setup.sh since it is unused after we changed setup.sh to install Python dependencies in a virtual environment.
* Remove --user flag from CI workflows
* Fix codecov problem
We need to run the python `coverage` library to collect coverage.
However, gdb was failing to find it.
Recently, pwndbg moved to using venvs. When pwndbg is initialized,
it sets up the venv "manually", that is, no `source .venv/bin/activate`
is needed. When we run gdb tests, we pass the `gdbinit.py` of pwndbg as a
command to gdb to be executed like this:
`gdb --silent --nx --nh -ex 'py import coverage;coverage.process_startup()' --command PATH_TO_gdbinit.py`
The problem is that *order* matters. This means that *first* coverage
is imported (by `-ex py ...`) and only *then* the init script is executed.
When `coverage` is first imported, its library search path only looks at
Python's system libraries, and not at the venv that `gdbinit.py` would load.
So we would try to import an old version of coverage and fail.
One solution would be to move the commands around, but this would be an
ugly hack IMHO. **Instead**, we should just tell gdb that loading `gdbinit.py`
is an **init** command that has to be executed before other commands.
Previously, the order did not matter: all of pwndbg's dependencies were
installed directly into the system Python, so the library search path
was the same before and after loading `gdbinit.py`.
---------
Co-authored-by: disconnect3d <dominik.b.czarnota@gmail.com>
Co-authored-by: intrigus <abc123zeus@live.de>
* Refactor the `got` command to support more use cases
- Create a function to parse the information about loaded shared object libraries from `info sharedlibrary` (see the sketch after this list)
- Make the `got` command able to show the entries of other libraries loaded in memory
- Make the `got` command show more relocation types: not only `JUMP_SLOT` relocations but also `IRELATIVE` and `GLOB_DAT` relocations
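A rough sketch of the `info sharedlibrary` parsing idea (the regex and function name are illustrative, not the actual implementation):
```
import re

# Lines look roughly like:
# 0x00007ffff7dd5090  0x00007ffff7df54e0  Yes   /lib64/ld-linux-x86-64.so.2
SHAREDLIB_LINE = re.compile(
    r"^(0x[0-9a-fA-F]+)\s+(0x[0-9a-fA-F]+)\s+.*\s(\S+)$"
)

def parse_info_sharedlibrary(output):
    """Return a list of (from_addr, to_addr, path) for each loaded library."""
    libraries = []
    for line in output.splitlines():
        match = SHAREDLIB_LINE.match(line.strip())
        if match:
            start, end, path = match.groups()
            libraries.append((int(start, 16), int(end, 16), path))
    return libraries
```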
* Update tests for the `got` command
* Update pwndbg/commands/got.py
* Update pwndbg/commands/got.py
* Update pwndbg/commands/got.py
* Update pwndbg/commands/got.py
* Update pwndbg/commands/got.py
* Update pwndbg/commands/got.py
* Update pwndbg/commands/got.py
* Update pwndbg/commands/got.py
* Update the comment
https://github.com/pwndbg/pwndbg/pull/1771#discussion_r1251054080
* Update the tests
* Add some hints for the qemu users
---------
Co-authored-by: Disconnect3d <dominik.b.czarnota@gmail.com>
* Change setup.sh to create & use Python virtualenv
The `setup.sh` script now creates a `.venv` directory during execution and installs all dependencies into that directory. Then, `gdbinit.py` adds the proper `site-packages` directory as the first item of `sys.path`.
Fixes #1634.
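A simplified sketch of that idea (the paths and glob below are illustrative, not the exact `gdbinit.py` code):
```
import glob
import os
import sys

PWNDBG_DIR = os.path.dirname(os.path.abspath(__file__))
# Find the venv's site-packages, e.g. .venv/lib/python3.11/site-packages
# (the glob accounts for different Python versions).
site_packages = glob.glob(
    os.path.join(PWNDBG_DIR, ".venv", "lib", "python*", "site-packages")
)

if site_packages:
    # Put the venv first so its packages win over any system-wide copies.
    sys.path.insert(0, site_packages[0])
```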
* Improve RISCV support
This is a resurrection of #829
Co-authored-by: Tobias Faller <faller@endiio.com>
* Silence bogus vermin warning
* Fix relative backwards jump calculations
The target address wouldn't be truncated to the pointer size.
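A small worked example of the missing truncation, assuming a 32-bit target:
```
ptr_size = 4                          # assume a 32-bit target
ptr_mask = (1 << (8 * ptr_size)) - 1  # 0xffffffff

pc = 0x10000
offset = -0x20000                     # relative backwards jump past zero
target = (pc + offset) & ptr_mask     # truncate to the pointer size
print(hex(target))                    # 0xffff0000, not -0x10000
```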
* Add basic qemu-user test
* Run qemu-user tests in CI
* Make shfmt happy
* Fix pwntools < 4.11.0 support
* Support RISCV32 for pwntools < 4.11.0 as well
---------
Co-authored-by: Tobias Faller <faller@endiio.com>
* Remove use of OnlyWhenRunning when we already have OnlyWhenHeapInitialized
* Remove use of OnlyWhenHeapInitialized when we already have OnlyWithTcache
* Add OnlyWhenUserspace Decorator #1459
* The decorator is implemented as the inverse of OnlyWhenQemuKernel (see the sketch below)
* Apply the decorator to all of the heap commands and tls, auxv and environ/envp
* Update pwndbg/commands/__init__.py
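A rough sketch of what such an inverse decorator could look like (the predicate and message are illustrative, not pwndbg's actual code):
```
import functools

def is_qemu_kernel():
    # Stand-in for whatever check OnlyWhenQemuKernel consults in pwndbg.
    return False

def only_when_userspace(function):
    """Run the wrapped command only when NOT debugging a QEMU kernel target."""
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        if is_qemu_kernel():
            print(f"{function.__name__}: only available for userspace targets")
            return None
        return function(*args, **kwargs)
    return wrapper
```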
---------
Co-authored-by: Disconnect3d <dominik.b.czarnota@gmail.com>
This commit optimizes the `bin_ascii` function used by the `vis_heap_chunks` command.
That function executed the following line on each call:
```
valid_chars = list(map(ord, set(printable) - set("\t\r\n\x0c\x0b")))
```
And it could be called thousands of times, e.g. ~90k times in one benchmark.
This commit moves the creation of the `valid_chars` list to the global space so it is computed only once.
As a result, on a simple benchmark we improved the speed of `vis_heap_chunks` command from 4.6s to 3s.
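A minimal sketch of the idea (the real `bin_ascii` in pwndbg differs):
```
from string import printable

# Computed once at import time instead of on every bin_ascii() call.
VALID_CHARS = set(map(ord, set(printable) - set("\t\r\n\x0c\x0b")))

def bin_ascii(bs):
    # Render printable bytes as characters and everything else as '.'.
    return "".join(chr(c) if c in VALID_CHARS else "." for c in bs)
```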
The `pwndbg.gdblib.regs.sp` value is cached and its cache is cleared on the next stop, memory write, or register write event.
We keep a dictionary of stacks in Pwndbg that is updated on each stop by the `stack.update` functionality, which reused a cached stack pointer (`gdblib.regs.sp`) value.
As a result, if we had more than one thread, `pwndbg.gdblib.stacks.stacks` reported the same stack address for all threads, and the `canary` command then printed the same addresses N times, where N is the number of threads that were running.
This commit fixes this bug by clearing the registers cache when we switch to a different thread in the loop in the `stacks.update` function.
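A hedged sketch of the idea using GDB's Python API; the cache reset and stack lookup below are placeholders, not pwndbg's real API:
```
import gdb

def update_stacks(stacks, clear_register_cache, find_stack_page):
    """Record each thread's stack page, invalidating cached registers
    before reading the stack pointer of every thread."""
    for thread in gdb.selected_inferior().threads():
        thread.switch()
        clear_register_cache()                  # placeholder for pwndbg's cache reset
        sp = int(gdb.parse_and_eval("$sp"))
        stacks[thread.global_num] = find_stack_page(sp)  # placeholder lookup
```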
vmmap would try to add the executable to memory pages if the `info auxv`
command contained an address, but the memory maps would be accessed
recursively when trying to look up the start of the ELF based on the
given address.
Since qemu doesn't provide memory map info, do a leap of faith and check
whether the start of the page containing the given address holds the ELF magic
header.
Since the program headers are more likely to be on the same page as the
ELF header than the program entrypoint, try both.
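A small sketch of that check, with a generic `read_memory` callable standing in for the actual memory-read helper:
```
ELF_MAGIC = b"\x7fELF"
PAGE_SIZE = 0x1000

def looks_like_elf_page(addr, read_memory):
    """Return True if the page containing addr starts with the ELF magic."""
    page_start = addr & ~(PAGE_SIZE - 1)
    try:
        return bytes(read_memory(page_start, len(ELF_MAGIC))) == ELF_MAGIC
    except Exception:
        # The page may simply not be readable under qemu-user.
        return False
```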