A castle on the sand
Neural networks are black boxes, and so are the language models, multimodal models, and agent systems built on them. Being a black box means we fundamentally cannot know the reasoning process that leads to a particular output, which makes it difficult to predict the consequences of changing the model.
Until recently, everything seemed to work well, but now problems are starting to surface. Excessive flattery, hallucinations, and abnormal behaviors are appearing even in top-tier AI services.…
Read more ⟶

Formalize Everything
Language models are being used to assist human work across various fields, including coding and documentation, thanks to their ability to handle both natural and formal languages. But we should now be able to use them to define new problems and solve them. Specification writing in formal language is a prime example of such a problem.
Writing specifications in formal language is crucial for verification and accuracy, but it requires so much time and effort that it is rarely done except for very important projects.…
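To make "specification in formal language" concrete, here is a minimal sketch in Python that states a sorting specification as executable postconditions; the function names and checks are illustrative assumptions on my part, not taken from the article.

    from collections import Counter

    def is_sorted(xs: list[int]) -> bool:
        # Ordering property: each element is <= its successor.
        return all(a <= b for a, b in zip(xs, xs[1:]))

    def checked_sort(xs: list[int]) -> list[int]:
        # Implementation paired with its specification.
        result = sorted(xs)
        # Postcondition 1: the output is ordered.
        assert is_sorted(result)
        # Postcondition 2: the output is a permutation of the input.
        assert Counter(result) == Counter(xs)
        return result

    print(checked_sort([3, 1, 2]))  # [1, 2, 3]

Even this toy spec shows why the effort is rarely spent: the specification is a second, independent description of the program's intent, which is exactly the kind of tedious, precision-demanding writing a language model could help automate.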
Read more ⟶

Fuzzing-Guided Static Analysis: From a Reinforcement Learning Perspective
Modern fuzzing techniques (e.g., Directed Fuzzing) heavily rely on static analysis information to guide their fuzzing campaigns. However, the success of these campaigns can be significantly limited when static analysis produces imprecise results, often due to its dependence on human-designed heuristics.
The key insight is that fuzzing results provide ground truth about program behavior. This information could help refine static analysis in two ways: recovering information lost due to unsound analysis and eliminating spurious results from overly conservative analysis.…
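As a rough sketch of that idea, the snippet below (in Python, with a toy call graph and names I made up, none of it from the article) treats call edges observed during fuzzing as ground truth, using them both to recover edges the static analysis missed and to flag edges it may have over-approximated.

    # Over-approximate static result: may contain spurious edges (imprecision).
    static_edges = {("main", "parse"), ("parse", "handle_a"), ("parse", "handle_b")}

    # Under-approximate dynamic ground truth: edges actually executed while
    # fuzzing, including one the static analysis missed (e.g., an indirect call).
    fuzzed_edges = {("main", "parse"), ("parse", "handle_a"), ("handle_a", "log")}

    # 1. Recover information lost to unsound analysis:
    #    edges observed at runtime but never predicted statically.
    recovered = fuzzed_edges - static_edges

    # 2. Flag candidate spurious results of overly conservative analysis:
    #    edges predicted statically but never exercised by fuzzing.
    unconfirmed = static_edges - fuzzed_edges

    refined_edges = static_edges | recovered
    print("recovered:", recovered)      # {('handle_a', 'log')}
    print("unconfirmed:", unconfirmed)  # {('parse', 'handle_b')}

Because fuzzing under-approximates program behavior, an unexercised edge is evidence of spuriousness rather than proof, so a refinement loop would use these signals to adjust the analysis incrementally, which is presumably where the reinforcement-learning framing comes in.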
Read more ⟶