Ideas for Student Projects

Various ideas which were listed on https://cs.anu.edu.au/cybersec/student_projects/ when I was supervising student projects at ANU (where a full-time course load is 24 ‘units’ per semester). I’m not currently working on these, but I’m happy to offer some guidance to anyone interested in taking them on.

Improved Coverage-Guided Fuzzing for Hypothesis

https://github.com/Zac-HD/hypofuzz/issues has a list of substantial upgrades I’d love to see for the HypoFuzz fuzzer, ranging from new features based on the literature to a substantial overhaul of the architecture to unlock runtime autoscaling across any number of cores.

Upgrading Strategies for Datetimes and Text

Hypothesis users describe possible inputs to their tests using “strategies”, and so making those more likely to generate a failure-inducing input is one of the tricks that make Hypothesis so effective. It’s the kind of magic which turns out to be someone doing a lot of work behind the scenes, and in this project that’s where you come in!

One of our oldest open issues suggests biasing st.datetimes() towards bug-revealing values. Another complementary issue has similar suggestions for st.timezones().

Unicode text is also much more complicated than you might think - even if you agree with this statement! We’d like to add a charset= argument to st.characters(), generally improve the distribution of individual codepoints, and support more precise constraints on the st.from_regex() strategy, to help people find more bugs without needing to be experts in low-level text processing.
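As a rough sketch of the kind of biasing the datetimes issue asks for - all names here (weird_datetimes, TROUBLE_SPOTS, _nudge) are mine, purely for illustration; the real work would happen inside the strategy itself:

```python
# Sketch: mix ordinary random datetimes with values near known trouble spots
# (epoch boundaries, leap days, overflow points). Illustrative only.
from datetime import datetime, timedelta

from hypothesis import strategies as st

TROUBLE_SPOTS = [
    datetime(1970, 1, 1),                 # Unix epoch
    datetime(2000, 2, 29),                # a leap day
    datetime(2038, 1, 19, 3, 14, 7),      # 32-bit time_t rollover
    datetime(9999, 12, 31, 23, 59, 59),   # near datetime.max
]

def _nudge(base, offset):
    try:
        return base + offset
    except OverflowError:  # stepping past datetime.min/max
        return base

def weird_datetimes() -> st.SearchStrategy:
    near_trouble = st.builds(
        _nudge,
        st.sampled_from(TROUBLE_SPOTS),
        st.timedeltas(min_value=timedelta(days=-2), max_value=timedelta(days=2)),
    )
    # Keep the full range available, but crudely up-weight the interesting region.
    return st.one_of(near_trouble, near_trouble, st.datetimes())
```

A production version would also need to consider timezone-transition instants, which is where the st.timezones() issue comes in.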

Symbolic Execution of Property-Based Tests

You’ve probably heard that “testing can be used to show the presence of bugs, but never to show their absence” - usually as an argument for formal methods like symbolic execution. On the other hand, testing is often easier and finds bugs in formally verified systems too. But why not use both?

Crosshair is an SMT-based symbolic execution tool for Python. Providing an interface or extension to check Hypothesis tests, like propverify in Rust or DeepState in C and C++, would give users the best of both worlds. See these notes for details.

We’ve already shipped a nice proof-of-concept integration, so the remaining work is to make it production-ready: https://github.com/HypothesisWorks/hypothesis/issues/3086
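For a sense of what the integration has to handle, here is an ordinary property-based test; the commented-out backend setting follows the alternative-backends design discussed in the linked issue, so treat its exact spelling as an assumption rather than settled API:

```python
# A property-based test of the kind the integration must handle: random
# generation finds bugs quickly, while symbolic execution can (sometimes)
# prove the property holds for *all* inputs.
from hypothesis import given, settings, strategies as st

def run_length_encode(s: str) -> list[tuple[str, int]]:
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)

@given(st.text())
# @settings(backend="crosshair")  # opt in to the symbolic backend (assumed spelling)
def test_rle_roundtrip(s):
    assert run_length_decode(run_length_encode(s)) == s
```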

Fuzzing with Protobuf or ASN.1 schemas

Many programs communicate via binary formats such as ASN.1 or protocol buffers, and so automatically generating valid messages is an attractive testing technique. In this project, you would write a function to take a schema, optionally with customised ‘strategies’ for sub-structures, and return a Hypothesis strategy which generates such messages.

To take this from a six-unit to a twelve-unit project, you could search for bugs, characterise how it combines with e.g. targeted PBT (like IJON or FuzzFactory), evaluate the benefits of integrated test-case reduction, and so on - it’s well-defined to start with, but there are plenty of extensions if you finish the core quickly. hypothesis-010 and hypothesis-jsonschema are good examples of related projects.
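A minimal sketch of the core function for the protobuf case, assuming the standard google.protobuf descriptor API and covering only a handful of scalar field types (the function name from_protobuf is mine):

```python
# Rough sketch of schema-driven generation for protocol buffers: introspect
# the message descriptor and build a Hypothesis strategy per field. Enums,
# nested messages, maps, oneofs, and user-supplied overrides are the real work.
from google.protobuf.descriptor import FieldDescriptor
from hypothesis import strategies as st

_SCALARS = {
    FieldDescriptor.TYPE_BOOL: st.booleans(),
    FieldDescriptor.TYPE_STRING: st.text(),
    FieldDescriptor.TYPE_BYTES: st.binary(),
    FieldDescriptor.TYPE_DOUBLE: st.floats(allow_nan=False, width=64),
    FieldDescriptor.TYPE_FLOAT: st.floats(allow_nan=False, width=32),
    FieldDescriptor.TYPE_INT32: st.integers(-(2**31), 2**31 - 1),
    FieldDescriptor.TYPE_INT64: st.integers(-(2**63), 2**63 - 1),
    FieldDescriptor.TYPE_UINT32: st.integers(0, 2**32 - 1),
    FieldDescriptor.TYPE_UINT64: st.integers(0, 2**64 - 1),
}

def from_protobuf(message_class) -> st.SearchStrategy:
    """Return a strategy for instances of a generated protobuf message class."""
    fields = {}
    for field in message_class.DESCRIPTOR.fields:
        strat = _SCALARS.get(field.type)
        if strat is None:
            continue  # TODO: nested messages, enums, maps, oneofs, ...
        if field.label == FieldDescriptor.LABEL_REPEATED:
            strat = st.lists(strat)
        fields[field.name] = strat
    return st.builds(message_class, **fields)
```

Handling nested message fields (and recursive schemas, without unbounded recursion) is probably the first extension to tackle.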

Extending the Hypothesis Ghostwriter

The Hypothesis Ghostwriter can automatically generate property-based tests for six different properties based on source-code introspection - without needing to execute the code under test, unlike previous work (Randoop, EvoSuite, QuickSpec, FuzzGen, etc.).

Enhancements could add more properties, or use more sophisticated introspection logic to determine the required argument types for unannotated parameters. For a larger, probably Honours-scale, project you could implement an alternative mode which does include execution of the speculative tests - allowing users to trade away runtime and safety for improved test code.

https://github.com/HypothesisWorks/hypothesis/issues/2855 is a good starting point. Note that ghostwriting is surprisingly un-glamorous, and consists mostly of handling a large pile of edge cases…
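For context, the existing Ghostwriter can be driven from the hypothesis write command line or directly from Python:

```python
# The existing Ghostwriter, for reference: each function returns the source
# code of a test as a string, based purely on introspection of signatures
# and type annotations (no execution of the code under test).
import gzip

from hypothesis.extra import ghostwriter

# Writes a test asserting decompress(compress(x)) == x for generated bytes.
print(ghostwriter.roundtrip(gzip.compress, gzip.decompress))

# "magic" guesses which properties apply to everything in a module.
print(ghostwriter.magic(gzip))
```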

Generating graph data-structures

There is significant interest in generating graph and graph-like data structures with Hypothesis (see e.g. hypothesis-geojson, hypothesis-geometry, and hypothesis-networkx). There is also a substantial literature on random generation of these structures, but this prior art generally focusses on generating uniform distributions (vs heuristic ‘bug-finding’ distributions). Designing the generation strategy for locality of mutation and efficient shrinking is also a novel-for-graphs problem, though there is plenty of prior art in Hypothesis for other objects.

This is likely to be a twelve-unit project to produce and ship useful open source code; though proof-of-concept and deeper projects are both possible.
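A deliberately naive sketch (simple_graphs is an invented name) shows why this is harder than it looks:

```python
# Naive graph strategy: draw a node count, then a set of edges between nodes.
# It works, but nothing here biases towards dense, cyclic, or degenerate
# graphs, and shrinking is poor - exactly the design problems of this project.
from hypothesis import strategies as st

@st.composite
def simple_graphs(draw, max_nodes=10):
    n = draw(st.integers(min_value=0, max_value=max_nodes))
    possible_edges = [(a, b) for a in range(n) for b in range(a + 1, n)]
    edges = draw(
        st.lists(st.sampled_from(possible_edges), unique=True)
        if possible_edges
        else st.just([])
    )
    return {"nodes": list(range(n)), "edges": edges}
```

Because edges refer to nodes by index, shrinking the node count invalidates edges wholesale, and small mutations to the underlying choices can change the graph drastically - designing around that is the interesting part.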

Did you implement the model you verified?

TLA+ formally verifies properties of your system design, but not the implementation. We can gain confidence that we have implemented the spec by checking execution traces from testing or even production traffic, or with a custom simulation harness. See also this experience report, though I think stateful testing will provide a better toolkit than they had.

A demonstration system connecting model-based testing to TLA+ verification would be a significant advance in the state of the art, suitable for an Honours student or as a twelve-unit project if you are already comfortable with TLA+.
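On the testing side, Hypothesis’s stateful testing already generates and shrinks action sequences; below is a sketch of a state machine that records an execution trace for later checking. The trace format (and the toy bank-account example) is invented for illustration - translating such traces into a form the TLA+ tools can check against the spec is the project.

```python
# Sketch: a rule-based state machine that records a trace of actions and
# observed state. In a real integration, teardown() would hand the trace to
# a TLA+ trace checker rather than printing it.
import json

from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, invariant, rule

class BankAccountMachine(RuleBasedStateMachine):
    def __init__(self):
        super().__init__()
        self.balance = 0
        self.trace = [{"action": "Init", "balance": 0}]

    @rule(amount=st.integers(min_value=1, max_value=100))
    def deposit(self, amount):
        self.balance += amount
        self.trace.append({"action": "Deposit", "amount": amount, "balance": self.balance})

    @rule(amount=st.integers(min_value=1, max_value=100))
    def withdraw(self, amount):
        if amount <= self.balance:  # implementation-side guard
            self.balance -= amount
        self.trace.append({"action": "Withdraw", "amount": amount, "balance": self.balance})

    @invariant()
    def non_negative(self):
        assert self.balance >= 0

    def teardown(self):
        print(json.dumps(self.trace))  # placeholder for the trace-checking step

TestBankAccount = BankAccountMachine.TestCase
```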

An adversarial fair scheduler

Async programming frameworks like Trio or the standard-library asyncio offer substantial speedups for IO-bound code, but may introduce new bugs that only manifest under unusual scheduling orders of the subtasks. rr chaos mode is interesting prior art, albeit for threads rather than an async task model.

So what if we could use a custom scheduler for tests which attempts to induce bugs? I think this would uncover many bugs, but raises some further research questions. For example: should we generate distinct inputs, or only task schedules? The former covers more states; the latter admits the property “same result regardless of schedule”. Are adversarial schedules useful on real-world programs, or do they just reveal uninteresting failures? And what about fault injection (adding delays or possible IO errors)? This could reveal many bugs, but requires substantial work to give an acceptable user experience.
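Here is a toy version of the idea, with tasks written as plain generators and Hypothesis choosing which ready task runs next (run_interleaved and make_incrementer are invented names; a real tool would hook Trio or asyncio’s scheduler instead). The counter deliberately contains a lost-update race, so the “same result regardless of schedule” property fails and the schedule search finds it:

```python
# Toy schedule fuzzing: tasks are generators that yield at "await points",
# and Hypothesis picks which ready task runs next.
from hypothesis import given, strategies as st

def run_interleaved(tasks, choices):
    """Run generator-based tasks to completion, using `choices` to pick the order."""
    tasks = list(tasks)
    while tasks:
        index = choices.draw(st.integers(0, len(tasks) - 1)) if len(tasks) > 1 else 0
        try:
            next(tasks[index])
        except StopIteration:
            tasks.pop(index)

def make_incrementer(counter, n):
    def task():
        for _ in range(n):
            value = counter["value"]      # read
            yield                         # await point: another task may run here
            counter["value"] = value + 1  # write (lost-update race, on purpose)
    return task()

@given(st.data())
def test_counter_is_schedule_independent(data):
    counter = {"value": 0}
    run_interleaved([make_incrementer(counter, 5), make_incrementer(counter, 5)], data)
    assert counter["value"] == 10  # fails under some interleavings - a real race
```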

This project could be a good twelve-unit or Honours project, or you could use a semester project to prototype and scope your Honours research.

Repurposing PyPy’s tracing JIT to measure code coverage

Fuzzing in Python usually sees a 2x-70x slowdown due to the overhead of coverage measurement, so reducing this would obviously be very valuable. PyPy is a Python implementation based on a just-in-time compiler (JIT), which offers a large speedup on repetitive workloads like fuzzing but considerably worse relative performance when coverage is measured. Having the JIT tracer output coverage measurements directly could eliminate this overhead entirely, improving fuzzing performance by two orders of magnitude.
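To see where that overhead comes from, the sketch below shows roughly what pure-Python coverage collection looks like - a callback on every executed line, which is also exactly the kind of thing that hurts PyPy’s JIT. (Everything here is illustrative; real tools like coverage.py use a C tracer and, on newer CPythons, sys.monitoring, but still pay a cost per executed line.)

```python
# Illustrative line-coverage collection via sys.settrace: the callback fires
# on every executed line, recording (filename, lineno) pairs.
import sys

covered = set()

def tracer(frame, event, arg):
    if event == "line":
        covered.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer  # returning the tracer keeps line events enabled in this frame

def run_with_coverage(fn, *args):
    sys.settrace(tracer)
    try:
        return fn(*args)
    finally:
        sys.settrace(None)
```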

This project would be good for an ambitious Honours student with an interest in programming language implementation, especially just-in-time compilation, and would likely lead to a published paper and widespread use if successfully implemented.