On Scaling Neurosymbolic Programming through Guided Logical Inference
Abstract
Probabilistic neurosymbolic learning (PNL) seeks to integrate neural networks with symbolic programming.
Many state-of-the-art systems rely on a reduction to the Probabilistic Weighted Model Counting Problem (PWMC), which requires computing a Boolean formula called the logical provenance.
However, PWMC is #P-hard, and the number of clauses in the logical provenance formula can grow exponentially, creating a major bottleneck that limits the applicability of PNL solutions in practice.
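For reference, weighted model counting sums the weights of the satisfying assignments of the provenance formula; in standard notation (the textbook definition, not a construction specific to this paper):
$$\mathrm{WMC}(\varphi, w) \;=\; \sum_{\omega \,\models\, \varphi} \;\prod_{\ell \in \omega} w(\ell),$$
where $\omega$ ranges over the assignments satisfying $\varphi$ and $w(\ell)$ is the weight of literal $\ell$, in the PNL setting given by the probabilities the neural networks attach to the corresponding facts.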
We propose a new approach centered on an exact algorithm, DPNL, that bypasses the computation of the logical provenance.
DPNL relies on an oracle and a recursive, DPLL-like decomposition to guide and speed up logical inference.
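A minimal sketch of this idea (not the authors' implementation; all names and the oracle interface are illustrative assumptions): the probability of the query is computed by Shannon expansion over the probabilistic facts, and an oracle prunes any branch whose partial assignment already decides the query, so the provenance formula is never materialized.

```python
from typing import Callable, Dict, List, Optional

Assignment = Dict[str, bool]
# The oracle returns True/False when the partial assignment decides the query,
# and None when more facts must be assigned.
Oracle = Callable[[Assignment], Optional[bool]]

def dpnl_prob(oracle: Oracle, probs: Dict[str, float],
              variables: List[str],
              partial: Optional[Assignment] = None) -> float:
    """Pr[query] under independent fact probabilities, by guided Shannon expansion."""
    partial = dict(partial or {})
    decided = oracle(partial)
    if decided is not None:                 # oracle prunes: subtree fully decided
        return 1.0 if decided else 0.0
    x = next(v for v in variables if v not in partial)   # pick an unassigned fact
    p = probs[x]
    # Branch on x = True and x = False, weighting each branch by its probability.
    return (p * dpnl_prob(oracle, probs, variables, {**partial, x: True})
            + (1 - p) * dpnl_prob(oracle, probs, variables, {**partial, x: False}))

# Toy query "a and (b or c)" with neural-predicted fact probabilities.
def toy_oracle(a: Assignment) -> Optional[bool]:
    if a.get("a") is False:
        return False                        # decided early: b, c never assigned
    if "a" in a and (a.get("b") or a.get("c")):
        return True
    if len(a) == 3:
        return a["a"] and (a["b"] or a["c"])
    return None

print(dpnl_prob(toy_oracle, {"a": 0.9, "b": 0.5, "c": 0.2}, ["a", "b", "c"]))
# 0.9 * (1 - 0.5 * 0.8) = 0.54
```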
Furthermore, we show that this approach can be adapted for approximate reasoning with $\epsilon$ or $(\epsilon, \delta)$ guarantees, yielding ApproxDPNL.
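One hedged illustration of how an $\epsilon$ guarantee can arise from the same recursion (a sketch only; the actual ApproxDPNL algorithm and its $(\epsilon, \delta)$ variant are specified in the paper): maintain lower and upper bounds on the query probability, counting still-unexplored branch mass as 0 in the lower bound and 1 in the upper, and stop as soon as the gap is at most $\epsilon$.

```python
def approx_dpnl(oracle: Oracle, probs: Dict[str, float],
                variables: List[str], eps: float) -> float:
    lo, hi = 0.0, 1.0                       # invariant: lo <= Pr[query] <= hi
    stack = [({}, 1.0)]                     # (partial assignment, branch mass)
    while stack and hi - lo > eps:
        partial, mass = stack.pop()
        decided = oracle(partial)
        if decided is True:
            lo += mass                      # this mass certainly satisfies the query
        elif decided is False:
            hi -= mass                      # this mass certainly does not
        else:
            x = next(v for v in variables if v not in partial)
            stack.append(({**partial, x: True}, mass * probs[x]))
            stack.append(({**partial, x: False}, mass * (1 - probs[x])))
    return (lo + hi) / 2                    # within eps/2 of the exact value

# Reusing toy_oracle and the probabilities from the previous sketch:
print(approx_dpnl(toy_oracle, {"a": 0.9, "b": 0.5, "c": 0.2},
                  ["a", "b", "c"], eps=0.05))
```

Since the gap between the bounds always equals the unexplored mass on the stack, returning the midpoint once the gap drops below $\epsilon$ guarantees the stated error bound.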
Experiments show significant performance gains.
DPNL scales exact inference further, resulting in more accurate models. ApproxDPNL shows potential for advancing the scalability of neurosymbolic programming even further by incorporating approximations, while still providing guarantees on the reasoning process.