Minghao Liu^{1}, David M. Cerna^{2}, Filipe Gouveia^{1}, Andrew Cropper^{1}

###### Abstract

Knowledge refactoring compresses a logic program by introducing new rules. Current approaches struggle to scale to large programs. To overcome this limitation, we introduce a constrained optimisation refactoring approach. Our first key idea is to encode the problem with decision variables based on literals rather than rules. Our second key idea is to focus on linear invented rules. Our empirical results on multiple domains show that our approach can refactor programs quicker and with more compression than the previous state-of-the-art approach, sometimes by 60%.

## 1 Introduction

Knowledge refactoring is a key component of human intelligence (Rumelhart and Norman 1976). Knowledge refactoring is also important in AI, such as for policy reuse in planning (Bonet, Drexler, and Geffner 2024) and to improve performance in program synthesis (Ellis et al. 2018; Dumančić, Guns, and Cropper 2021). The goal is to compress a knowledge base, such as a logic program, by introducing new rules.

To illustrate knowledge refactoring, consider the logic program:

$\mathcal{P}_{1}=\left\{\begin{array}[]{l}\emph{g(A)}\leftarrow\emph{p(A), q(A,B), r(B), s(A,B)}\\\emph{g(A)}\leftarrow\emph{p(A), q(A,B), r(B), t(A,B)}\\\emph{g(A)}\leftarrow\emph{p(B), q(B,C), r(C), w(A,B)}\\\emph{g(A)}\leftarrow\emph{p(A), q(B,A), r(A), z(A,B)}\end{array}\right\}$

This program has 4 rules, each with 5 literals. Thus, the size of this program is 20 (literals).

We could add a new $\emph{aux}_{1}$ rule to refactor $\mathcal{P}_{1}$ as:

$\mathcal{P}_{2}=\left\{\begin{array}[]{l}\emph{aux${}_{1}$(A,B) $\leftarrow$ {p}(A), {q}(A,B), {r}(B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{1}$(A,B), {s}(A,B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{1}$(A,B), {t}(A,B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{1}$(B,C), {w}(A,B)}\\\emph{g(A) $\leftarrow$ {p}(A), {q}(B,A), {r}(A), {z}(A,B)}\end{array}\right\}$

The size of $\mathcal{P}_{2}$ is smaller than that of $\mathcal{P}_{1}$ (18 vs 20 literals), yet $\mathcal{P}_{2}$ is syntactically equivalent to $\mathcal{P}_{1}$ after unfolding (Tamaki and Sato 1984). Unfolding means replacing each *aux*_{1} body literal with the literals of its definition and eliminating *aux*_{1} from the program; a formal description is in Section 3.

A limitation of current refactoring approaches is scalability (Ellis et al. 2018; Cao et al. 2023; Bowers et al. 2023). For instance, Knorf (Dumančić, Guns, and Cropper 2021) frames the refactoring problem as a constrained optimisation problem (COP) (Rossi, Van Beek, and Walsh 2006). Knorf builds a set of invented rules by enumerating all subsets of the rules in an input program. Knorf then uses a COP solver to find a subset of the invented rules that maximally compresses the input program. However, because it enumerates all subsets, Knorf struggles to scale to programs with large rules and to programs with many rules.

To overcome this scalability limitation, we introduce a novel refactoring approach. Our first key contribution is a new COP formulation. Instead of enumerating all subsets of rules, we use decision variables to determine whether a literal is used in an invented rule. Our new formulation has two key advantages. First, the number of decision variables required is exponentially reduced. Second, an invented rule can use any combination of literals in the input program, rather than a strict subset of an input rule.

To illustrate the benefit of our first contribution, consider the input program $\mathcal{P}_{1}$. The invented rule $\emph{aux}_{1}$ cannot refactor the last rule because no instance of $\emph{aux}_{1}$ can cover the literals *p(A), q(B,A), r(A)* simultaneously. However, we could invent the rule $\emph{aux}_{2}$ to refactor the program as:

$\mathcal{P}_{3}=\left\{\begin{array}[]{l}\emph{aux${}_{2}$(A,B,C) $\leftarrow$ {p}(A), {q}(B,C), {r}(C)}\\\emph{g(A) $\leftarrow$ {aux}${}_{2}$(A,A,B), {s}(A,B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{2}$(A,A,B), {t}(A,B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{2}$(B,B,C), {w}(A,B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{2}$(A,B,A), {z}(A,B)}\end{array}\right\}$

The body of $\emph{aux}_{2}$ is not a subset of any rule in $\mathcal{P}_{1}$, so Knorf could not invent $\emph{aux}_{2}$. By contrast, because we allow an invented rule to use any literal, we can invent $\emph{aux}_{2}$. The program $\mathcal{P}_{3}$ is smaller than $\mathcal{P}_{2}$ (16 vs 18 literals) yet is syntactically equivalent to $\mathcal{P}_{1}$ after unfolding. As this example shows, our first contribution allows our approach to find better (smaller) refactorings.

Our second key contribution is to use *linear invented rules*, whose (i) body literals need not occur in any rule of the input program, and (ii) size may be larger than that of any rule in the input program.

To illustrate this idea, consider the program:

$\mathcal{Q}_{1}=\mathcal{P}_{1}\cup\left\{\begin{array}[]{l}\emph{g(A) $\leftarrow$ {p}(A), {p}(B), {q}(A,B), {r}(B)}\\\emph{g(A) $\leftarrow$ {p}(A), {q}(A,B), {q}(B,C), {r}(C)}\end{array}\right\}$

Although we can use *aux*_{1} and *aux*_{2} to refactor $\mathcal{Q}_{1}$, we can find a smaller refactoring by introducing a *linear invented rule* *aux*_{3} to refactor $\mathcal{Q}_{1}$ as:

$\mathcal{Q}_{2}=\left\{\begin{array}[]{l}\emph{{aux}${}_{3}$(A,B,C,D,E,F,G) $\leftarrow$ {p}(A), {p}(B), {q}(C,D),}\\\hskip 99.0pt\emph{{q}(E,F), {r}(G)}\\\emph{g(A) $\leftarrow$ {aux}${}_{3}$(A,A,A,B,A,B,B), {s}(A,B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{3}$(A,A,A,B,A,B,B), {t}(A,B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{3}$(B,B,B,C,B,C,C), {w}(A,B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{3}$(A,A,B,A,B,A,A), {z}(A,B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{3}$(A,B,A,B,A,B,B)}\\\emph{g(A) $\leftarrow$ {aux}${}_{3}$(A,A,A,B,B,C,C)}\end{array}\right\}$

This refactoring reduces the size from 30 $(\mathcal{Q}_{1})$ to 22 $(\mathcal{Q}_{2})$.

In a linear invented rule, the variables occur linearly in the body literals. We prove (Theorem 2) that if a program can be refactored using any invented rule then it can be refactored to the same size (or smaller) using a linear invented rule. This contribution allows our approach to find smaller refactorings.

### Novelty and Contributions

The three main novelties of this paper are (i) the theoretical proof of the complexity of the optimal knowledge refactoring problem, (ii) a concise and efficient encoding of the problem as a COP, and (iii) demonstrating the effectiveness of our approach on large-scale and real-world benchmarks.

Overall, our contributions are:

- We introduce the *optimal knowledge refactoring* problem, where the goal is to compress a logic program by inventing rules. We prove that the problem is $\mathcal{NP}$-hard.

- We introduce MaxRefactor, which solves the optimal knowledge refactoring problem by formulating it as a COP.

- We evaluate our approach on multiple benchmarks. Our results show that, compared to the state-of-the-art approach, MaxRefactor can improve the compression rate by 60%. Our results also show that MaxRefactor scales well to large and real-world programs.

## 2 Related Work

Knowledge refactoring. Knowledge refactoring is important in many areas of AI (Bonet, Drexler, and Geffner 2024), notably in program synthesis (Ellis et al. 2018; Bowers et al. 2023; Cao et al. 2023; Hocquette, Dumancic, and Cropper 2024). For instance, Dumančić, Guns, and Cropper (2021) show that learning from refactored knowledge can substantially improve the predictive accuracy of an inductive logic programming system and reduce learning times, because the knowledge is structured in a more reusable way and redundant knowledge is removed.

Model reformulation. There is much research on automatically reformulating constraint satisfaction problem (CSP) models (O’Sullivan 2010; Charlier et al. 2017; Vo 2020). The main categories include implied constraint generation (Charnley, Colton, and Miguel 2006; Bessiere, Coletta, and Petit 2007), symmetry and dominance breaking (Liberti 2012; Mears and de la Banda 2015), pre-computation (Cadoli and Mancini 2006), and cross-modeling language translation (Drake et al. 2002). We differ because we work with logic programs and invent rules.

Redundancy and compression. Knowledge refactoring is distinct from knowledge redundancy, which is useful in many areas of AI, such as Boolean satisfiability (Heule et al. 2015). For instance, Plotkin (1971) introduced a method to remove logically redundant literals and clauses from a logical theory. Similarly, theory compression (De Raedt et al. 2008) approaches select a subset of clauses such that performance is minimally affected with respect to a cost function. We differ from redundancy elimination and theory compression because we restructure knowledge by inventing rules.

Predicate invention. We refactor a logic program by introducing predicate symbols that do not appear in the input program, which is known as predicate invention (Kramer 2020). Predicate invention is a major research topic in program synthesis and inductive logic programming (Kok and Domingos 2007; Muggleton, Lin, and Tamaddoni-Nezhad 2015; Hocquette and Muggleton 2020; Jain et al. 2021; Silver et al. 2023; Cerna and Cropper 2024). We contribute to this topic by developing an efficient and scalable method to invent predicate symbols to compress a logic program.

Program refactoring. In program synthesis, many researchers (Ellis et al. 2018; Bowers et al. 2023; Cao et al. 2023) refactor functional programs by searching for local changes (new $\lambda$-expressions) that optimise a cost function. We differ from these approaches because we (i) consider logic programs, (ii) use a declarative solving paradigm (COP), and (iii) guarantee optimal refactoring. Alps (Dumančić et al. 2019) compresses facts in a logic program, whereas we compress rules.

Knorf. The most similar work is Knorf (Dumančić, Guns, and Cropper 2021). Given a logic program $\mathcal{P}$ as input, Knorf works as follows. For each rule $r\in\mathcal{P}$, Knorf enumerates every subset $s$ of the body of $r$. For each subset $s$ and each combination $h$ of the variables in $s$, Knorf creates a new rule $\emph{aux}_{s_{h}}$. Knorf then creates a COP problem with a decision variable for each $\emph{aux}_{s_{h}}$, plus decision variables to state whether a rule $r\in\mathcal{P}$ is refactored using $\emph{aux}_{s_{h}}$. Knorf then uses a COP solver to find a subset of the invented rules that leads to a refactoring with maximal compression. Knorf has many scalability issues. Foremost, it enumerates all subsets of all rules and thus struggles to scale to programs with large rules and many rules. Specifically, for a program with $n$ rules and a maximum rule size $k$, Knorf uses $O(n2^{k})$ decision variables. By contrast, our MaxRefactor approach does not enumerate all subsets of rules. Instead, as we describe in Section 4, we define a new rule $\emph{aux}_{i}$ by creating decision variables to represent whether any literal in $\mathcal{P}$ is in $\emph{aux}_{i}$. In addition, we create decision variables to state whether a rule in $\mathcal{P}$, and each of its body literals, is refactored using $\emph{aux}_{i}$. Since the number of invented rules is a predefined constant, MaxRefactor only needs $O(nk)$ decision variables. Finally, we propose the most general form of invented rule, which expands the space of invented rules and enables MaxRefactor to find better refactorings than Knorf.
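To make the asymptotic gap concrete, the following sketch (Python, with hypothetical helper names of our own) compares the rough decision-variable counts $O(n2^{k})$ and $O(nk)$ stated above; it is an order-of-magnitude illustration, not either system's actual bookkeeping:

```python
def knorf_decision_vars(n, k):
    """Rough count for Knorf: one variable per subset of each rule's body."""
    return n * 2 ** k

def maxrefactor_decision_vars(n, k, K=2):
    """Rough count for MaxRefactor: per invented rule, one variable per
    body literal of each input rule; the number K of invented rules is a
    predefined constant."""
    return K * n * k

# For 100 rules with 20 body literals each, the counts differ by orders
# of magnitude.
print(knorf_decision_vars(100, 20))        # 104857600
print(maxrefactor_decision_vars(100, 20))  # 4000
```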

## 3 Problem Setting

We focus on refactoring knowledge in the form of a logic program, specifically a definite logic program with the least Herbrand model semantics (Lloyd 2012). For simplicity, we use the term logic program to refer to a definite logic program. We assume familiarity with logic programming but restate some key terms. A logic program is a set of *rules* of the form:

$h\leftarrow a_{1},a_{2},\dots,a_{m}$

where $h$ is the *head literal* and $a_{1},\dots,a_{m}$ are the *body literals*. A *literal* is a predicate symbol with one or more variables. For example, *parent(A,B)* is a literal with the predicate symbol *parent* and two variables *A* and $B$. The predicate symbol of a literal $a$ is denoted as $pred(a)$ and the set of its variables is denoted as $var(a)$. The head literal of a rule is true if and only if all the body literals are true. The head literal of a rule $r$ is denoted as $head(r)$ and the set of body literals is denoted as $body(r)$. The *size* of a rule $r$ is defined as $size(r)=|body(r)|+1$, which is the total number of literals. The *size* of a logic program $\mathcal{P}$ is defined as $size(\mathcal{P})=\sum_{r\in\mathcal{P}}{size(r)}$.
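These size definitions can be sketched directly in code. The following Python snippet (the `(head, body)` pair representation of rules is our own choice for illustration) computes the size of $\mathcal{P}_{1}$ from the introduction:

```python
# A literal is a (predicate, args) pair; a rule is (head, [body literals]).
def rule_size(rule):
    head, body = rule
    return 1 + len(body)  # the head literal plus each body literal

def program_size(program):
    return sum(rule_size(r) for r in program)

# P1 from the introduction: four rules, each with four body literals.
P1 = [
    (("g", ("A",)), [("p", ("A",)), ("q", ("A", "B")), ("r", ("B",)), ("s", ("A", "B"))]),
    (("g", ("A",)), [("p", ("A",)), ("q", ("A", "B")), ("r", ("B",)), ("t", ("A", "B"))]),
    (("g", ("A",)), [("p", ("B",)), ("q", ("B", "C")), ("r", ("C",)), ("w", ("A", "B"))]),
    (("g", ("A",)), [("p", ("A",)), ("q", ("B", "A")), ("r", ("A",)), ("z", ("A", "B"))]),
]
print(program_size(P1))  # 20
```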

### 3.1 Knowledge Refactoring

Our goal is to reduce the size of a logic program whilst preserving its semantics. However, checking the semantic equivalence of two logic programs is undecidable (Shmueli 1987). Therefore, we focus on finding syntactically equivalent refactorings. Syntactic equivalence implies semantic equivalence, but the reverse is not necessarily true.

We check syntactic equivalence by *unfolding* (Tamaki and Sato 1984) refactored programs. Unfolding is a transformation in logic programming. We adapt the unfolding definition of Nienhuys-Cheng and Wolf (1997) to (i) resolve multiple literals, whereas the original definition resolves only one literal at a time, and (ii) prohibit variables that appear only in the body of a rule:

###### Definition 1 (Rule unfolding).

Given a logic program $\mathcal{P}=\{c_{1},c_{2},\dots,c_{N}\}$ and a rule $r$, where $r$ does not have variables that only appear in its body, then unfolding $\mathcal{P}$ upon $r$ means constructing the logic program $\mathcal{Q}=\{d_{1},d_{2},\dots,d_{N}\}$, where each $d_{i}$ is the resolvent of $c_{i}$ and $r$ if $head(r)$ is unifiable with any literal in $body(c_{i})$, otherwise $d_{i}=c_{i}$.

###### Example 1 (Rule unfolding).

Consider the program:

$\mathcal{P}=\left\{\begin{array}[]{l}\emph{g(A) $\leftarrow$ {p}(A), {aux}(A,B)}\\\emph{g(A) $\leftarrow$ {p}(B), {p}(C), {aux}(A,B), {aux}(A,C)}\\\emph{g(A) $\leftarrow$ {p}(B), {q}(A,B), {r}(B)}\end{array}\right\}$

and the rule $r:\emph{aux(A,B) $\leftarrow$ {p}(B), {q}(A,B)}$. The result of unfolding $\mathcal{P}$ upon $r$ is:

$\mathcal{Q}=\left\{\begin{array}[]{l}\emph{g(A) $\leftarrow$ {p}(A), {p}(B), {q}(A,B)}\\\emph{g(A) $\leftarrow$ {p}(B), {p}(C), {q}(A,B), {q}(A,C)}\\\emph{g(A) $\leftarrow$ {p}(B), {q}(A,B), {r}(B)}\end{array}\right\}$

Given a set of rules $S$, $\textsf{unfold}(\mathcal{P},S)$ is the result of successively unfolding $\mathcal{P}$ upon every rule in $S$.
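A minimal sketch of this operation in Python follows. It assumes, as holds for invented rules, that the unfolded rule's head variables are distinct and include all of its body variables, so resolution reduces to a variable substitution; the tuple representation of literals is our own:

```python
def unfold_rule(rule, inv):
    """Replace each body literal of `rule` whose predicate matches the head
    of `inv` with inv's body under the induced variable substitution."""
    (ih_pred, ih_vars), inv_body = inv
    head, body = rule
    new_body = []
    for pred, args in body:
        if pred == ih_pred:
            theta = dict(zip(ih_vars, args))  # head variable -> argument
            new_body.extend((p, tuple(theta[v] for v in vs)) for p, vs in inv_body)
        else:
            new_body.append((pred, args))
    # A rule body is a set of literals, so duplicates collapse.
    return (head, list(dict.fromkeys(new_body)))

# r: aux(A,B) <- p(B), q(A,B), applied to the second rule of Example 1.
r = (("aux", ("A", "B")), [("p", ("B",)), ("q", ("A", "B"))])
c = (("g", ("A",)),
     [("p", ("B",)), ("p", ("C",)), ("aux", ("A", "B")), ("aux", ("A", "C"))])
print(unfold_rule(c, r))
# (('g', ('A',)), [('p', ('B',)), ('p', ('C',)), ('q', ('A', 'B')), ('q', ('A', 'C'))])
```

The output matches the second rule of $\mathcal{Q}$ in Example 1: both *aux* literals are resolved in one pass and the duplicated *p* literals merge.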

To refactor a logic program, we want to introduce invented rules:

###### Definition 2 (Invented rule).

Given a logic program $\mathcal{P}$ and a finite set of variables $\mathcal{X}$, an invented rule $r$ satisfies four conditions:

- 1.
$pred(head(r))\notin\mathcal{P}$

- 2.
$\forall a\in body(r),\;pred(a)\in\mathcal{P}$

- 3.
$\forall a\in body(r),\;var(a)\subseteq\mathcal{X}$

- 4.
$var(head(r))=\bigcup_{a\in body(r)}{var(a)}$

In this paper, we use the prefix *aux* to denote the predicate symbol of an invented rule.

We want to refactor an input rule using invented rules.We call such rules *refactored rules*:

###### Definition 3 (Refactored rule).

Let $r$ be a rule and $S$ be a set of invented rules. Then $r^{\prime}$ is a refactored rule of $r$ iff $\textsf{unfold}(\{r^{\prime}\},S)=\{r\}$. For a logic program $\mathcal{P}$ and a set of invented rules $S$, $\mathcal{I}(\mathcal{P},S)$ denotes the set of all possible refactored rules of rules in $\mathcal{P}$.

###### Example 2 (Refactored rule).

Given the rules in $\mathcal{Q}$ and the invented rule *aux* in Example 1, the first two rules in $\mathcal{P}$ are refactored rules.

We define a *refactored program*:

###### Definition 4 (Refactored program).

Given a logic program $\mathcal{P}$ and a finite set of invented rules $S$, a logic program $\mathcal{Q}\subseteq\mathcal{P}\cup S\cup\mathcal{I}(\mathcal{P},S)$ is a refactored program of $\mathcal{P}$ iff $\textsf{unfold}(\mathcal{Q}\setminus S,\mathcal{Q}\cap S)=\mathcal{P}$, and we refer to $\mathcal{Q}$ as a *proper refactoring* if $\forall S^{\prime}\subset\mathcal{Q}\cap S$, $\textsf{unfold}(\mathcal{Q}\setminus S,\mathcal{Q}\cap S^{\prime})\neq\mathcal{P}$.

To help explain the intuition behind Definition 4, note the following three properties of a refactored program. First, a refactored program consists of three kinds of rules: (a) input rules from $\mathcal{P}$, (b) invented rules from $S$, and (c) refactored rules from $\mathcal{I}(\mathcal{P},S)$. Second, $\mathcal{P}$ and $\mathcal{Q}$ must be syntactically equivalent, which means that we can unfold $\mathcal{Q}$ upon the invented rules to get $\mathcal{P}$. Third, we are only interested in programs $\mathcal{Q}$ which are proper refactorings of $\mathcal{P}$, as any non-proper refactoring containing $\mathcal{Q}$ is always a larger program.

We want *optimal knowledge refactoring (OKR)*:

###### Definition 5 (Optimal knowledge refactoring problem).

Given a logic program $\mathcal{P}$ and a finite set of invented rules $S$, the optimal knowledge refactoring problem is to find a refactored program $\mathcal{Q}$ such that for any refactored program $\mathcal{Q^{\prime}}$, $size(\mathcal{Q})\leq size(\mathcal{Q^{\prime}})$.

We prove that the OKR problem is $\mathcal{NP}$-hard:

###### Theorem 1.

The optimal knowledge refactoring problem is $\mathcal{NP}$-hard.

###### Proof (Sketch).

The full proof is in Appendix A. We provide a reduction from *maximum independent set in 3-regular Hamiltonian graphs* (Fleischner, Sabidussi, and Sarvanov 2010) to a propositional variant of OKR where $c_{1},c_{2}\in{S}$ may refactor the same rule of $\mathcal{P}$ iff $\mathit{body}(c_{1})\cap\mathit{body}(c_{2})=\emptyset$ (OPKR_{WD}). Let $G=(V,E)$ be a 3-regular Hamiltonian graph and $C$ a maximum independent set of $G$. Observe that in $G^{\prime}=(V\setminus C,E^{\prime})$, where $E^{\prime}$ contains all edges from $E$ with both endpoints in $V\setminus C$, every vertex has degree at most 2. Thus, computing a maximum weighted independent set of $G^{\prime}$ is possible in polynomial time. Furthermore, it can be shown that optimal refactorings of instances derived from a 3-regular Hamiltonian graph $G$ are always a superset of a maximum independent set of $G$. Thus, we can reduce OPKR_{WD} to OPKR, a fragment of OKR.∎

## 4 MaxRefactor

Given an input logic program $\mathcal{P}$, a space of possible invented rules $S$, and a maximum number $K$ of invented rules to consider, MaxRefactor refactors $\mathcal{P}$ by finding ${C}\subseteq{S}$ where $|{C}|=K$. In other words, MaxRefactor solves the optimal knowledge refactoring problem.

Before describing how we search for a refactoring, we first introduce *linear invented rules* to restrict the space of possible invented rules.

### 4.1 Linear Invented Rule

The space of possible invented rules ${S}$ directly influences the effectiveness and efficiency of refactoring. For example, Knorf defines ${S}$ as all subsets of rules in an input program. For instance, if there is an input rule *g(A) $\leftarrow$ p(A,B), q(B,C), r(B,D)*, Knorf adds four possible invented rules to ${S}$:

$\left\{\begin{array}[]{l}\emph{aux${}_{1}$(A,B,C) $\leftarrow$ {p}(A,B), {q}(B,C)}\\\emph{aux${}_{2}$(A,B,D) $\leftarrow$ {p}(A,B), {r}(B,D)}\\\emph{aux${}_{3}$(B,C,D) $\leftarrow$ {q}(B,C), {r}(B,D)}\\\emph{aux${}_{4}$(A,B,C,D) $\leftarrow$ {p}(A,B), {q}(B,C), {r}(B,D)}\end{array}\right\}$

Defining ${S}$ this way has the advantage that it restricts which literals can appear in an invented rule. However, this approach also has disadvantages. As we show in Section 1, this approach cannot invent a rule that is not a subset of any input rule (e.g. *aux*_{2} in $\mathcal{P}_{3}$), nor can the size of an invented rule be larger than that of any input rule (e.g. *aux*_{3} in $\mathcal{Q}_{2}$).

To overcome these limitations, we use a new approach to define the space of invented rules. We denote the set of body predicate symbols in the logic program $\mathcal{P}$ as $Pr({\mathcal{P}})$ and a finite set of variables as $\mathcal{X}$. We define the set of all possible body literals $L=\left\{p(v)\mid p\in Pr({\mathcal{P}}),v\in\mathcal{X}^{\mathit{arity}(p)}\right\}$. We define ${S}$ as the space of all combinations of literals from $L$, i.e. the power set of $L$. The size of ${S}$ is exponential in the size of $L$, and $S$ is clearly a superset of the space considered by Knorf. Therefore, our key insight is not to consider all combinations of $L$ but instead only the most general ones. We call such rules *linear invented rules*:

###### Definition 6 (Linear invented rule).

A linear invented rule is a rule in which every variable occurs exactly once in its body.

###### Example 3 (Linear invented rule).

The rule *aux*_{1}*(A,B,C) $\leftarrow$ p(A,B), q(B,C)* is not linear because its body literals share the variable $B$. By contrast, the rule *aux*_{2}*(A,B,C,D) $\leftarrow$ p(A,B), q(C,D)* is linear.

For any invented rule $r$, we can always build a linear invented rule $lin(r)$ as follows: (i) rewrite the body literals such that the predicate symbols remain unchanged and the variables occur linearly, then (ii) build a new head literal such that $var(head(lin(r)))=\bigcup_{a\in body(lin(r))}{var(a)}$.
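The construction of $lin(r)$ can be sketched as follows (Python; the fresh-variable naming scheme and the `aux` head symbol are illustrative choices, and the sketch assumes the body needs at most 26 variables):

```python
import string

def linearise(body, name="aux"):
    """Build lin(r): keep the predicate symbols but give every argument
    position a fresh variable, then collect all variables in the head."""
    fresh = iter(string.ascii_uppercase)
    head_vars, new_body = [], []
    for pred, args in body:
        new_args = tuple(next(fresh) for _ in args)
        head_vars.extend(new_args)
        new_body.append((pred, new_args))
    return ((name, tuple(head_vars)), new_body)

# aux1(A,B,C) <- p(A,B), q(B,C) generalises to aux(A,B,C,D) <- p(A,B), q(C,D)
print(linearise([("p", ("A", "B")), ("q", ("B", "C"))]))
# (('aux', ('A', 'B', 'C', 'D')), [('p', ('A', 'B')), ('q', ('C', 'D'))])
```

Applied to the non-linear rule of Example 3, this yields exactly the linear rule *aux*_{2} of that example (up to the head symbol's name).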

To motivate the use of linear invented rules in refactoring, we show the theorem:

###### Theorem 2.

If the optimal knowledge refactoring problem has a solution using the set of invented rules ${C}\subseteq{S}$ then it has a solution using ${C}^{\prime}\subseteq{S}_{lin}$, where ${S}_{lin}=\left\{lin(s)\mid s\in S\right\}$.

###### Proof.

For any invented rule $r\in{C}$, we can build a linear invented rule $r^{\prime}=lin(r)$ and set ${C}^{\prime}={C}\setminus\{r\}\cup\{r^{\prime}\}$. First, observe that $|{C}^{\prime}|\leq|{C}|$ because either an invented rule has a unique linear invented rule generalising it, or multiple invented rules of $C$ are generalised by the same linear invented rule. Second, any body literal in the input program that is covered by $r$ can also be covered by $r^{\prime}$ via a variable mapping. Thus, we can always get a refactored program of the same or smaller size using ${C}^{\prime}$.∎

Theorem 2 implies that we can restrict $S$ to linear invented rules without compromising the optimal solution. As we show below, defining $S$ as linear invented rules allows us to simplify the COP formulation.

### 4.2 COP Encoding

According to Theorem 1, the optimal knowledge refactoring problem is $\mathcal{NP}$-hard, making it computationally challenging. Constraint programming (CP) is a successful framework for modeling and solving hard combinatorial problems (Hebrard 2018). Therefore, MaxRefactor formulates this search problem as a constrained optimisation problem (COP) (Rossi, Van Beek, and Walsh 2006). Given (i) a set of *decision variables*, (ii) a set of *constraints*, and (iii) an *objective function*, a COP solver finds an assignment to the decision variables that satisfies all the specified constraints and minimises the objective function.

We describe our COP encoding below. COP problems can also be encoded as MaxSAT problems (Bacchus, Järvisalo, and Martins 2021), an optimisation variant of SAT; we also develop a MaxSAT encoding and include it in the appendix.

#### Decision Variables

MaxRefactor builds decision variables to determine (a) which literals are in which invented rules, and (b) how to refactor input rules using the invented rules. Knorf avoids task (a) because it creates a decision variable for each invented rule in $S$. However, as previously stated, our definition of $S$ is larger than Knorf's, even when considering only linear invented rules. To improve scalability, MaxRefactor therefore uses decision variables based on literals to build invented rules: we only decide which predicate symbols occur in the body of an invented rule, then assign variables linearly.

For task (a), for each possible invented rule $r_{k}$ for $k\in[1,K]$ and $p\in Pr({\mathcal{P}})$, we use an integer variable $\mathit{r}_{k,p}$ to indicate the number of literals with predicate symbol $p$ in the body of $r_{k}$.

For example, consider the input program:

$\mathcal{P}=\left\{\begin{array}[]{l}c_{1}:\emph{g(A)}\leftarrow\emph{p(B)\textsuperscript{\rm a1}, p(C)\textsuperscript{\rm a2}, q(A,B)\textsuperscript{\rm a3}, q(B,C)\textsuperscript{\rm a4}}\\c_{2}:\emph{g(A)}\leftarrow\emph{p(B)\textsuperscript{\rm a1}, q(A,B)\textsuperscript{\rm a2}, q(A,C)\textsuperscript{\rm a3}, s(C)\textsuperscript{\rm a4}}\end{array}\right\}$

where the superscript denotes the index of a body literal. For simplicity, assume that $K=1$, i.e. we can only invent one rule, denoted $r_{1}$. The decision variables for inventing this rule are $\{\mathit{r}_{1,p}$, $\mathit{r}_{1,q}$, $\mathit{r}_{1,s}\}$. Suppose $\mathit{r}_{1,p}=\mathit{r}_{1,q}=1$ and $\mathit{r}_{1,s}=0$. Then $r_{1}:\emph{aux}_{1}\emph{(A,B,C)}\leftarrow\emph{p(A), q(B,C)}$.

For task (b), MaxRefactor creates two groups of decision variables: (i) $\mathit{use_{c,k}}$ to denote whether an input rule $c$ is refactored with the invented rule $r_{k}$, and (ii) $\mathit{cover_{c,a,k}}$ to denote whether a body literal $a$ in an input rule $c$ is covered by an invented rule $r_{k}$, which means this literal is not in the refactoring of $c$.

Concerning (i), for each input rule $c\in\mathcal{P}$ and $k\in[1,K]$, we use a Boolean variable $\mathit{use}_{c,k}$ to indicate that $c$ is refactored using the invented rule $r_{k}$. However, as we show later in the example, an input rule can be refactored using the same invented rule multiple times. Therefore, we use the Boolean variable $\mathit{use}_{c,k}^{t}$, where $t\geq 1$, to indicate that $r_{k}$ is used $t$ times in the refactoring of $c$; the domain of $t$ is bounded by the maximum number of times a predicate symbol appears in an input rule.

Concerning (ii), for each input rule $c\in\mathcal{P}$, literal $a\in body(c)$, $k\in[1,K]$, and integer $t\geq 1$, we use a Boolean variable $\mathit{cover}_{c,a,k}^{t}$ to indicate that $a$ is covered by the $t$-th instance of $r_{k}$ in the refactoring.

Returning to the above example, the decision variables that determine the refactoring of each input rule are $\mathit{use}_{c_{i},1}^{t}$ and $\mathit{cover}_{c_{i},{\rm a}j,1}^{t}$ for each rule $c_{i}$, body literal ${\rm a}j$, and $t\in\{1,2\}$. Suppose the following variables are true and all others are false:

$\begin{array}[]{l}\mathit{use}_{c_{1},1}^{1},\;\mathit{use}_{c_{1},1}^{2},\;\mathit{use}_{c_{2},1}^{1}\\\mathit{cover}_{c_{1},{\rm a1},1}^{1},\;\mathit{cover}_{c_{1},{\rm a2},1}^{2},\;\mathit{cover}_{c_{1},{\rm a3},1}^{1},\;\mathit{cover}_{c_{1},{\rm a4},1}^{2}\\\mathit{cover}_{c_{2},{\rm a1},1}^{1},\;\mathit{cover}_{c_{2},{\rm a2},1}^{1}\end{array}$

Then the refactored rules are:

$\mathcal{Q}=\left\{\begin{array}[]{l}\emph{g(A)}\leftarrow\emph{aux}_{1}\emph{(B,A,B), }\emph{aux}_{1}\emph{(C,B,C)}\\\emph{g(A)}\leftarrow\emph{aux}_{1}\emph{(B,A,B), }\emph{q(A,C), s(C)}\end{array}\right\}$
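The three groups of decision variables can be enumerated by name with the following Python sketch (the naming scheme is our own; only the counts matter), which reproduces the variable counts of the running example and the $O(nk)$ total claimed in Section 2:

```python
def build_variables(program, preds, K, T):
    """Enumerate the decision variables of the encoding by name:
    r_{k,p} (integer), use_{c,k}^t and cover_{c,a,k}^t (Boolean)."""
    r_vars = [f"r_{k},{p}" for k in range(1, K + 1) for p in preds]
    use_vars, cover_vars = [], []
    for ci, (head, body) in enumerate(program, start=1):
        for k in range(1, K + 1):
            for t in range(1, T + 1):
                use_vars.append(f"use_c{ci},{k}^{t}")
                for ai in range(1, len(body) + 1):
                    cover_vars.append(f"cover_c{ci},a{ai},{k}^{t}")
    return r_vars, use_vars, cover_vars

# The running example: two rules with four body literals each, K=1, T=2.
P = [(("g", ("A",)), ["a1", "a2", "a3", "a4"]),
     (("g", ("A",)), ["a1", "a2", "a3", "a4"])]
r_vars, use_vars, cover_vars = build_variables(P, ["p", "q", "s"], K=1, T=2)
print(len(r_vars), len(use_vars), len(cover_vars))  # 3 4 16
```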

#### Constraints

MaxRefactor ensures the validity of the refactoring through a set of constraints.

We use Constraint (1) to ensure, for $k\in[1,K]$ and $c\in\mathcal{P}$, that if the $t$-th use of the invented rule $r_{k}$ appears in the refactoring of $c$ then so does the $(t-1)$-th use.

$\forall c,k,t\,(t>1),\;\mathit{use}_{c,k}^{t}\rightarrow\mathit{use}_{c,k}^{t-1}$ (1)

We use Constraint (2) to ensure that for $k\in[1,K]$, if $r_{k}$ is used $t$ times to refactor $c\in\mathcal{P}$, then $r_{k}$ cannot contain a predicate symbol $p$ that is not the predicate symbol of at least one literal in $\mathit{body}(c)$.

$\forall c,p,k,t\,(p{\rm\;not\;in\;}c),\;\neg\left(\mathit{use}_{c,k}^{t}\wedge\mathit{r}_{k,p}>0\right)$ (2)

We use Constraint (3) to ensure, for $k\in[1,K]$, that any predicate symbol $p$ of a literal $a$ in the body of $c\in\mathcal{P}$ covered by the invented rule $r_{k}$ is the predicate symbol of a literal in the body of $r_{k}$. Below, $e(a,p)$ abbreviates $pred(a)=p$.

$\forall c,a,k,t\,(e(a,p)),\;\mathit{cover}_{c,a,k}^{t}\rightarrow\left(\mathit{use}_{c,k}^{t}\wedge\mathit{r}_{k,p}>0\right)$ (3)

We use Constraint (4) to ensure, for $k\in[1,K]$ and $c\in\mathcal{P}$, that if the predicate symbol $p$ occurs in multiple body literals of $c$ and $r_{k,p}$ times in $r_{k}$, then a single use of $r_{k}$ can cover only $r_{k,p}$ of the occurrences of $p$ in $c$.

$\forall c,p,k,t,\;\left(\sum_{a\in body(c)\wedge e(a,p)}\mathit{cover}_{c,a,k}^{t}\right)\leq\mathit{r}_{k,p}$ (4)

Note that even if $\mathit{r}_{k,p}$ is larger than the number of body literals with predicate symbol $p$ in a rule, the refactoring is still valid, because these literals can be refactored by duplication. For example, given the input rule *g(A) $\leftarrow$ p(A,B)* and the invented rule *aux(A,B,C,D) $\leftarrow$ p(A,B), p(C,D)*, we can refactor the same literal twice and get *g(A) $\leftarrow$ aux(A,B,A,B)*.
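Constraint (4) can be sketched as a check on a candidate assignment (Python; the list-based representation of one instance's cover variables is our own):

```python
from collections import Counter

def constraint4_holds(body, covered, r_k):
    """For a single instance t of an invented rule r_k, the covered
    occurrences of each predicate symbol in a rule's body must not exceed
    that symbol's multiplicity r_{k,p} in r_k's body."""
    counts = Counter(pred for (pred, args), cov in zip(body, covered) if cov)
    return all(counts[p] <= r_k.get(p, 0) for p in counts)

# Rule c1: g(A) <- p(B), p(C), q(A,B), q(B,C); aux1 has one p and one q.
body = [("p", ("B",)), ("p", ("C",)), ("q", ("A", "B")), ("q", ("B", "C"))]
r1 = {"p": 1, "q": 1}
print(constraint4_holds(body, [True, False, True, False], r1))  # True
print(constraint4_holds(body, [True, True, False, False], r1))  # False: two p's, one slot
```

The second call fails because a single instance of $r_{1}$ would have to cover two *p* literals while $r_{1,p}=1$; covering the second *p* requires a second instance, exactly as in the running example.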

#### Objective

Before introducing our objective function, we first create two groups of useful yet redundant decision variables. First, for each $k\in[1,K]$, we use a Boolean variable $\mathit{used}_{k}$ to indicate whether the invented rule $r_{k}$ is used in the refactoring. Second, for each input rule $c\in\mathcal{P}$ and literal $a\in body(c)$, we use a Boolean variable $\mathit{covered}_{c,a}$ to indicate that $a$ is not in the refactoring of $c$. These variables are directly calculated from $\mathit{use}$ and $\mathit{cover}$ respectively. The constraints are as follows:

$\forall k,\;\left(\mathit{used}_{k}\leftrightarrow\exists c,t,\;\mathit{use}_{c,k}^{t}\right)$ (5)

$\forall c,a,\;\left(\mathit{covered}_{c,a}\leftrightarrow\exists k,t,\;\mathit{cover}_{c,a,k}^{t}\right)$ (6)

Following Definition 5, MaxRefactor searches for the refactored program with the smallest size. In our encoding, MaxRefactor achieves this goal by minimising the following objective function using a COP solver:

$\sum_{k}{\mathit{used}_{k}}+\sum_{k,p}{\mathit{r}_{k,p}}+\sum_{c,k,t}{\mathit{use}_{c,k}^{t}}-\sum_{c,a}{\mathit{covered}_{c,a}}$ (7)

The first two terms are the sizes of the head and body literals, respectively, used to define the invented rules added to the refactored program. The third term is the number of invented-rule literals used in the refactored rules of the input program. The last term is the total number of covered body literals in the input program, which is deducted from the size. Note that by adding the objective value to the size of the input program, we get the size of the refactored program.
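As a sanity check of this bookkeeping, the following sketch (with a hypothetical variable layout of our own, not the paper's implementation) evaluates objective (7) on the example program $\mathcal{P}_{1}$ from the introduction, where one invented rule with a 3-literal body is used once in each of three input rules:

```python
# A sketch of the size bookkeeping behind objective (7); the dict layout
# for the decision variables is our own, hypothetical choice.

def objective(used, r, use, covered):
    """used: {k: bool}; r: {(k, p): int}; use: {(c, k, t): bool}; covered: {(c, a): bool}."""
    heads = sum(used.values())     # one head literal per used invented rule
    bodies = sum(r.values())       # body literals defining the invented rules
    calls = sum(use.values())      # aux literals inserted into refactored input rules
    saved = sum(covered.values())  # input body literals that no longer appear
    return heads + bodies + calls - saved

# The introduction's example: aux1(A,B) <- p(A),q(A,B),r(B) is used once in
# each of three rules, covering three body literals per rule.
obj = objective(
    {1: True},
    {(1, "p"): 1, (1, "q"): 1, (1, "r"): 1},
    {(c, 1, 1): True for c in ("c1", "c2", "c3")},
    {(c, a): True for c in ("c1", "c2", "c3") for a in ("p", "q", "r")},
)
print(obj)  # -2: adding it to the input size 20 gives the refactored size 18
```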

### 4.3 The Optimal Refactoring

We show that by solving this COP, MaxRefactor solves the OKR problem (Definition 5):

###### Theorem 3.

Let $\mathcal{P}$ be a logic program such that $|\mathcal{P}|=n$ and let ${S}$ be a set of invented rules. Then MaxRefactor can solve the OKR problem with $K=\left\lceil\frac{n}{4}\right\rceil\left\lceil\frac{s-1}{2}\right\rceil$, where $s=\max\{\mathit{size}(c)\mid c\in\mathcal{P}\}$.

The proof is shown in Appendix B.

## 5 Experiments

To test our claim that MaxRefactor can find better refactorings than current approaches, our experiments aim to answer the question:

Q1: How does MaxRefactor compare against Knorf on the optimal knowledge refactoring problem?

To answer Q1, we compare the compression rate of the refactored programs produced by MaxRefactor and Knorf given the same timeout.

To understand the scalability of MaxRefactor, our experiments aim to answer the question:

Q2: How does MaxRefactor scale on larger and more diverse input programs?

To answer Q2, we evaluate the runtime and refactoring rate of MaxRefactor on progressively larger programs.

##### Settings

We use two versions of MaxRefactor: one which uses the COP solver CP-SAT^{5}^{5}5https://developers.google.com/optimization and another which uses the MaxSAT solver UWrMaxSat (Piotrów 2020). We consider at most two invented rules in a refactoring, as we found this leads to good results. For Knorf, we run the code from its repository^{6}^{6}6https://github.com/sebdumancic/knorf_aaai21 and use the same parameters as stated in that paper. All experiments use a computer with 5.2GHz CPUs and 16GB RAM. We run MaxRefactor and Knorf on a single CPU.

##### Reproducibility

The experimental data and the code to reproduce the results are in the appendix and will be made publicly available if the paper is accepted for publication.

### 5.1 Q1: Comparison with Knorf

##### Datasets

We use two datasets, Lego and Strings, from the Knorf paper. Lego has 200 programs, each with a size of 264–411. Strings has 196 programs, each with a size of 779–12,309. We tried using Knorf on other datasets but it frequently crashed on them^{7}^{7}7Knorf crashes on other datasets during its COP modelling phase. We contacted the authors of Knorf for advice and they confirmed that it is a fundamental bug related to incorrect assumptions about the input program. We tried to fix the bug according to their advice but this fixed only a small number of crashes and significantly increased the runtime. Therefore, we only report the results on Lego and Strings from the Knorf paper. so we only report the results on these two datasets.

##### Method

We compare the compression rates of the refactored programs returned by MaxRefactor and Knorf. Given an input program $\mathcal{P}$ and a refactored program $\mathcal{Q}$, the compression rate $cr$ is defined as:

$cr=\frac{size(\mathcal{P})-size(\mathcal{Q})}{size(\mathcal{P})}$ |

To be clear, a larger $cr$ indicates a better refactoring.
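The definition can be computed directly; for instance, the program $\mathcal{P}_{1}$ from the introduction compresses from 20 to 18 literals (the function name is ours):

```python
def compression_rate(size_p, size_q):
    """cr = (size(P) - size(Q)) / size(P); larger is better."""
    return (size_p - size_q) / size_p

# The introduction's example compresses 20 literals down to 18:
print(compression_rate(20, 18))  # 0.1
```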

##### Results

Figure 1 compares the compression rates of MaxRefactor and Knorf given the same timeout. Both approaches call CP-SAT. For Lego, MaxRefactor increases the compression rates by at least 27% across all programs. For Strings, MaxRefactor finds better refactorings on 172 of the 196 programs, increasing the compression rates by an average of 30%. On the remaining 24 programs, both MaxRefactor and Knorf fail to find any refactoring in half of them, while in the other half MaxRefactor's compression rates are on average 10% lower than Knorf's.

We also use a version of MaxRefactor that uses a MaxSAT solver (see Appendix C for encoding details). Figure 2 shows the results compared to Knorf. Given the same timeout (600s), MaxRefactor with MaxSAT achieves even better refactorings than with CP-SAT. For Strings, MaxRefactor finds better refactorings than Knorf on all 196 programs, increasing the compression rates by 30%–60%, with an average of 53%.

In conclusion, these results suggest that MaxRefactor performs substantially better than Knorf at solving the knowledge refactoring problem, thereby answering Q1.

### 5.2 Q2: Scalability

##### Datasets.

We evaluate the scalability of MaxRefactor on three real-world problems: DrugDrug (Dhami et al. 2018), Alzheimer (King, Sternberg, and Srinivasan 1995), and WordNet (Bordes et al. 2014). Each dataset contains millions of rules, so we can randomly sample programs of distinct sizes from them and use the sampled programs to test the performance of MaxRefactor. Specifically, for every size in {200, 250, 300, …, 1000}, we uniformly sample 10 programs from each dataset and run MaxRefactor (calling MaxSAT) to refactor them.

##### Results.

Figure 3(a) shows the runtime required for MaxRefactor to find an optimal solution and prove optimality for different sizes of input programs. The timeout for a single run is set to 3600s and the runtime is counted as 3600s if the limit is exceeded. For a program size of 400, the average runtime on all the datasets is less than 500s. The performance varies across datasets. For DrugDrug and Alzheimer, MaxRefactor can scale to programs with a size of 600 and find the optimal refactorings. For WordNet, it can scale to programs with a size of 1000.

Solving a program optimally means both finding an optimal refactoring and, crucially, proving optimality. In most cases, MaxRefactor finds near-optimal refactorings quickly but takes a while to prove optimality. To illustrate this behaviour, we select 3 challenging programs from different datasets and measure the sizes of the best refactored programs found by MaxRefactor under different time limits. Compared to the runtime needed to prove the refactorings optimal (opt-time), MaxRefactor can find refactored programs within 20% of the final size with only 44% (Figure 3(b)), 0.2% (Figure 3(c)), and 0.3% (Figure 3(d)) of the runtime respectively. For example, in the DrugDrug task (Figure 3(c)), MaxRefactor finds a refactoring of size 630 within 300s, while it takes 2,440 minutes to prove that the optimal refactoring size is 623. These results indicate that in practice MaxRefactor quickly finds a near-optimal refactoring but takes time to prove optimality.

In conclusion, these results suggest that MaxRefactor scales well on large and diverse programs from real-world problems, thereby answering Q2.

## 6 Conclusion and Limitations

We have introduced a novel approach to the optimal knowledge refactoring problem, which refactors a logic program to reduce its size. We implemented our ideas in MaxRefactor, which formulates this problem as a COP. Our experiments show that MaxRefactor drastically reduces the sizes of refactored programs compared to the state-of-the-art approach. Our results also show that MaxRefactor scales well on diverse programs from real-world problems.

##### Limitations

Our approach does not support recursive invented rules or invented rules whose bodies contain other invented predicates. Supporting these may lead to better refactorings. Future work should address this limitation.

## References

- Bacchus, F.; Järvisalo, M.; and Martins, R. 2021. Maximum Satisfiability. In *Handbook of Satisfiability - Second Edition*, 929–991. IOS Press.
- Berge, C. 1989. *Hypergraphs - Combinatorics of Finite Sets*, volume 45. North-Holland.
- Bessiere, C.; Coletta, R.; and Petit, T. 2007. Learning Implied Global Constraints. In *IJCAI 2007*, 44–49.
- Beth, T.; Jungnickel, D.; and Lenz, H. 1986. *Design Theory*. Cambridge University Press.
- Bonet, B.; Drexler, D.; and Geffner, H. 2024. On Policy Reuse: An Expressive Language for Representing and Executing General Policies that Call Other Policies. In *ICAPS 2024*, 31–39.
- Bordes, A.; Glorot, X.; Weston, J.; and Bengio, Y. 2014. A Semantic Matching Energy Function for Learning with Multi-relational Data - Application to Word-sense Disambiguation. *Mach. Learn.*, 94(2): 233–259.
- Bowers, M.; Olausson, T. X.; Wong, L.; Grand, G.; Tenenbaum, J. B.; Ellis, K.; and Solar-Lezama, A. 2023. Top-Down Synthesis for Library Learning. In *POPL 2023*, 1182–1213.
- Cadoli, M.; and Mancini, T. 2006. Automated Reformulation of Specifications by Safe Delay of Constraints. *Artif. Intell.*, 170(8-9): 779–801.
- Cao, D.; Kunkel, R.; Nandi, C.; Willsey, M.; Tatlock, Z.; and Polikarpova, N. 2023. Babble: Learning Better Abstractions with E-graphs and Anti-unification. In *POPL 2023*, 396–424.
- Cerna, D. M.; and Cropper, A. 2024. Generalisation through Negation and Predicate Invention. In *AAAI 2024*, 10467–10475.
- Charlier, B. L.; Khong, M. T.; Lecoutre, C.; and Deville, Y. 2017. Automatic Synthesis of Smart Table Constraints by Abstraction of Table Constraints. In *IJCAI 2017*, 681–687.
- Charnley, J.; Colton, S.; and Miguel, I. 2006. Automatic Generation of Implied Constraints. In *ECAI 2006*, 73–77.
- De Raedt, L.; Kersting, K.; Kimmig, A.; Revoredo, K.; and Toivonen, H. 2008. Compressing Probabilistic Prolog Programs. *Mach. Learn.*, 70(2): 151–168.
- Dhami, D. S.; Kunapuli, G.; Das, M.; Page, D.; and Natarajan, S. 2018. Drug-drug Interaction Discovery: Kernel Learning from Heterogeneous Similarities. *Smart Health*, 9: 88–100.
- Drake, L.; Frisch, A.; Gent, I.; and Walsh, T. 2002. Automatically Reformulating SAT-encoded CSPs. In *Workshop on Reformulating Constraint Satisfaction Problems*.
- Dumančić, S.; Guns, T.; and Cropper, A. 2021. Knowledge Refactoring for Inductive Program Synthesis. In *AAAI 2021*, 7271–7278.
- Dumančić, S.; Guns, T.; Meert, W.; and Blockeel, H. 2019. Learning Relational Representations with Auto-encoding Logic Programs. In *IJCAI 2019*, 6081–6087.
- Eén, N.; and Sörensson, N. 2006. Translating Pseudo-Boolean Constraints into SAT. *J. Satisf. Boolean Model. Comput.*, 2(1-4): 1–26.
- Ellis, K.; Morales, L.; Sablé-Meyer, M.; Solar-Lezama, A.; and Tenenbaum, J. 2018. Learning Libraries of Subroutines for Neurally-Guided Bayesian Program Induction. In *NeurIPS 2018*, 7816–7826.
- Fleischner, H.; Sabidussi, G.; and Sarvanov, V. I. 2010. Maximum Independent Sets in 3- and 4-regular Hamiltonian Graphs. *Discret. Math.*, 310(20): 2742–2749.
- Goddard, W.; and Henning, M. A. 2013. Independent Domination in Graphs: A Survey and Recent Results. *Discret. Math.*, 313(7): 839–854.
- Hebrard, E. 2018. Reasoning about NP-complete Constraints. In *IJCAI 2018*, 5672–5676.
- Heule, M.; Järvisalo, M.; Lonsing, F.; Seidl, M.; and Biere, A. 2015. Clause Elimination for SAT and QSAT. *J. Artif. Intell. Res.*, 53: 127–168.
- Hocquette, C.; Dumancic, S.; and Cropper, A. 2024. Learning Logic Programs by Discovering Higher-Order Abstractions. In *IJCAI 2024*, 3421–3429.
- Hocquette, C.; and Muggleton, S. H. 2020. Complete Bottom-Up Predicate Invention in Meta-Interpretive Learning. In *IJCAI 2020*, 2312–2318.
- Jain, A.; Gautrais, C.; Kimmig, A.; and De Raedt, L. 2021. Learning CNF Theories Using MDL and Predicate Invention. In *IJCAI 2021*, 2599–2605.
- King, R. D.; Sternberg, M. J. E.; and Srinivasan, A. 1995. Relating Chemical Activity to Structure: An Examination of ILP Successes. *New Gener. Comput.*, 13(3&4): 411–433.
- Kok, S.; and Domingos, P. M. 2007. Statistical Predicate Invention. In *ICML 2007*, 433–440.
- Kramer, S. 2020. A Brief History of Learning Symbolic Higher-Level Representations from Data (And a Curious Look Forward). In *IJCAI 2020*, 4868–4876.
- Liberti, L. 2012. Reformulations in Mathematical Programming: Automatic Symmetry Detection and Exploitation. *Math. Program.*, 131(1-2): 273–304.
- Lloyd, J. W. 2012. *Foundations of Logic Programming*. Springer Science & Business Media.
- Mears, C.; and de la Banda, M. G. 2015. Towards Automatic Dominance Breaking for Constraint Optimization Problems. In *IJCAI 2015*, 360–366.
- Muggleton, S. H.; Lin, D.; and Tamaddoni-Nezhad, A. 2015. Meta-interpretive Learning of Higher-order Dyadic Datalog: Predicate Invention Revisited. *Mach. Learn.*, 100(1): 49–73.
- Nienhuys-Cheng, S.-H.; and de Wolf, R. 1997. *Foundations of Inductive Logic Programming*. Springer.
- O'Sullivan, B. 2010. Automated Modelling and Solving in Constraint Programming. In *AAAI 2010*, 1493–1497.
- Piotrów, M. 2020. UWrMaxSat: Efficient Solver for MaxSAT and Pseudo-Boolean Problems. In *ICTAI 2020*, 132–136.
- Plotkin, G. 1971. *Automatic Methods of Inductive Inference*. Ph.D. thesis, Edinburgh University.
- Rosenfeld, M. 1964. Independent Sets in Regular Graphs. *Israel Journal of Mathematics*, 2(4): 262–272.
- Rossi, F.; Van Beek, P.; and Walsh, T. 2006. *Handbook of Constraint Programming*. Elsevier.
- Rumelhart, D. E.; and Norman, D. A. 1976. Accretion, Tuning and Restructuring: Three Modes of Learning.
- Shmueli, O. 1987. Decidability and Expressiveness Aspects of Logic Queries. In *PODS 1987*, 237–249.
- Silver, T.; et al. 2023. Predicate Invention for Bilevel Planning. In *AAAI 2023*, 12120–12129.
- Tamaki, H.; and Sato, T. 1984. Unfold/Fold Transformation of Logic Programs. In *ICLP 1984*, 127–138.
- Vo, H.-P. 2020. Reformulations of Constraint Satisfaction Problems: A Survey.

## Appendix A Proof of Theorem1

###### Definition 7 (Independent set).

Let $G=(V,E)$ be a graph. A set of vertices $S\subseteq V$ is an *independent set* if for all $u,v\in S$, $(u,v)\not\in E$. By $I(G)$ we denote the set of independent sets of $G$. We refer to $S$ as *maximal* if for all $v\in V\setminus S$, $S\cup\{v\}$ is not an independent set. By $I_{m}(G)$ we denote the set of maximal independent sets of $G$. The *independence number* of $G$, denoted $\alpha(G)$, is $\max\{|S|\mid S\in I_{m}(G)\}$. The *independent domination number* of $G$, denoted $i(G)$, is $\min\{|S|\mid S\in I_{m}(G)\}$.
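These quantities can be checked by brute force on small graphs. The following sketch (our own helper, exponential and only suitable for tiny graphs) computes $\alpha(G)$ and $i(G)$:

```python
from itertools import combinations

def maximal_independent_sets(vertices, edges):
    """Enumerate maximal independent sets by brute force (tiny graphs only)."""
    def independent(s):
        return all(not (u in s and v in s) for u, v in edges)
    sets = [set(c) for n in range(len(vertices) + 1)
            for c in combinations(vertices, n) if independent(set(c))]
    return [s for s in sets
            if all(not independent(s | {v}) for v in set(vertices) - s)]

def alpha(vertices, edges):   # independence number
    return max(len(s) for s in maximal_independent_sets(vertices, edges))

def i_num(vertices, edges):   # independent domination number
    return min(len(s) for s in maximal_independent_sets(vertices, edges))

# A 4-cycle has alpha = i = 2; a path on 3 vertices has alpha = 2 but i = 1,
# since {2} is a maximal independent set of the path.
c4 = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
p3 = ([1, 2, 3], [(1, 2), (2, 3)])
print(alpha(*c4), i_num(*c4), alpha(*p3), i_num(*p3))  # 2 2 2 1
```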

###### Definition 8 ($\Delta$-regular).

A graph $G=(V,E)$ is $\Delta$-regular if for every $v\in V$, $\mathit{degree}(v,G)=\Delta$.

Let MIS^{3} be defined as follows:

Maximum Independent Set in 3-regular Hamiltonian Graphs (MIS^{3})
Given: A 3-regular Hamiltonian graph $G$ and an integer $k\geq 0$.
Problem: Is there an independent set in $G$ of size $\geq k$?

Fleischner, Sabidussi, and Sarvanov (2010) proved that MIS^{3} is $\mathcal{NP}$-complete.

Optimal Propositional Knowledge Refactoring without Duplication (OPKR_{WD})
Given: A propositional definite logic program $P$, and a finite set of invented rules $C$.
Problem: Find a proper propositional refactored program $P^{\prime}$ such that for any propositional refactored program $Q$, $size(P^{\prime})\leq size(Q)$, and for $C^{\prime}\subseteq C$, where $\textsf{unfold}(P^{\prime}\setminus C^{\prime},P^{\prime}\cap C^{\prime})=P$, the following condition holds for $P^{\prime}$: if there exist $c_{1},c_{2}\in C^{\prime}$ such that $\mathit{body}(c_{1})\cap\mathit{body}(c_{2})\not=\emptyset$, then for all $r\in P^{\prime}$, $\{\mathit{pred}(c_{1}),\mathit{pred}(c_{2})\}\not\subseteq\mathit{body}(r)$.

###### Definition 9 ($G$-induced refactoring problem).

Let $G=(V,E)$ be a 3-regular Hamiltonian graph. Then the *G-Induced Refactoring Problem*, $\mathit{induced}(G)=(P,C,g,h)$, is defined as follows:

- •
$B=\{b_{1},\cdots,b_{|E|}\}$ where for all $1\leq i<j\leq|E|$, $b_{i}\not=b_{j}$

- •
$D=\{c_{1},\cdots,c_{|E|}\}$ where for all $1\leq i<j\leq|E|$, $c_{i}\not=c_{j}$

- •
$A=\{a_{1},\cdots,a_{|V|}\}$ where for all $1\leq i<j\leq|V|$, $a_{i}\not=a_{j}$

- •
$g:V\mapsto A$ and $h:E\mapsto B$ are bijections

- •
$P=\{P_{1},\cdots,P_{|E|}\}$ is a set of propositional definite rules such that for all $1\leq i\leq|E|$, $\mathit{body}(P_{i})=B\cup\{c_{i}\}$

- •
$C=\{C_{1},\cdots,C_{|V|}\}$ is a set of propositional definite rules such that the following holds:

- –
for all $c,c^{\prime}\in C$ with $c\not=c^{\prime}$, $\mathit{pred}(c)\not=\mathit{pred}(c^{\prime})$,

- –
for all $c\in C$, $\mathit{pred}(c)\in A$, and

- –
for all $c\in C$, $\mathit{body}(c)\subseteq B$, and for all $b\in B$, $b\in\mathit{body}(c)$ iff there exists $v\in V$ such that $h^{-1}(b)=(v,g^{-1}(\mathit{pred}(c)))\in E$.
###### Lemma 1.

OPKR_{WD} is $\mathcal{NP}$-hard.

###### Proof.

We consider a reduction from MIS^{3}. Let $G=(V,E)$ be a 3-regular Hamiltonian graph, $\mathit{induced}(G)=(P,C,g,h)$ a $G$-induced refactoring problem, and $P^{\prime}$ a solution to OPKR_{WD} for $P$ and $C$ where for $C^{\prime}\subseteq C$, $\textsf{unfold}(P^{\prime}\setminus C^{\prime},P^{\prime}\cap C^{\prime})=P$. Observe that for all $c_{1},c_{2}\in C^{\prime}$, $\mathit{body}(c_{1})\cap\mathit{body}(c_{2})=\emptyset$: for all $p\in P$, $\bigcup_{c\in C}\mathit{body}(c)\subset\mathit{body}(p)$, so any two overlapping invented rules in $C^{\prime}$ would occur together in some refactored rule, which OPKR_{WD} forbids. Hence, $S=\{g^{-1}(\mathit{pred}(c))\mid c\in C^{\prime}\}$ forms an independent set of $G$, i.e. $S\in I(G)$. Furthermore, $S\in I_{m}(G)$, i.e. $S$ is maximal, because if $S\not\in I_{m}(G)$, then there exists $W\subset V$ such that $S\cup W\in I_{m}(G)$ and $C^{\prime}\cup\{g(w)\mid w\in W\}$ would result in a program $P^{\prime\prime}$ that is a better refactoring of $P$. Furthermore, $S$ must be maximum, as otherwise there would exist $S^{\prime}\in I_{m}(G)$ such that $|S^{\prime}|>|S|$, and $C^{\prime\prime}=\{g(w)\mid w\in S^{\prime}\}$ would result in a program $P^{\prime\prime}$ that is a better refactoring of $P$.∎

### A.1 Optimal Refactoring With Duplication

Proposition 2.1 of Goddard and Henning (2013) presents a lower bound on the independent domination number (the minimum size of a maximal independent set) of graphs with maximum degree $\Delta$:

###### Proposition 1 ((Berge 1989; Goddard and Henning 2013)).

For a graph $G$ with $n$ vertices and maximum degree $\Delta$,

$\left\lceil\frac{n}{\Delta+1}\right\rceil\leq i(G)\leq n-\Delta$ |

Observe that this proposition holds for $\Delta$-regular graphs as they are a special case of graphs with maximum degree $\Delta$. Furthermore, Observation 4.1 of Goddard and Henning (2013) puts an upper bound on the size of the maximum independent set of a regular graph.

###### Observation 1 ((Goddard and Henning 2013; Rosenfeld 1964)).

If $G$ is a regular graph on $n$ vertices with no isolated vertex, then $i(G)\leq\alpha(G)\leq\frac{n}{2}$.

Optimal Propositional Knowledge Refactoring with Duplication (OPKR)
Given: A propositional definite logic program $P$, and a finite set of invented rules $C$.
Problem: Find a proper propositional refactored program $P^{\prime}$ such that for any propositional refactored program $Q$, $size(P^{\prime})\leq size(Q)$.

###### Lemma 2.

Let $G=(V,E)$ be a 3-regular Hamiltonian graph and $\mathit{induced}(G)=(P,C,g,h)$ a $G$-induced refactoring problem. Let $P^{\prime}$ be a solution to OPKR with input $P$ and $C$, and let $C^{\prime}\subseteq C$ be such that $\textsf{unfold}(P^{\prime}\setminus C^{\prime},P^{\prime}\cap C^{\prime})=P$. Then $C^{\prime}$ is a superset of a set $C^{\prime\prime}\subseteq C$ such that the program $P^{\prime\prime}$ with $\textsf{unfold}(P^{\prime\prime}\setminus C^{\prime\prime},P^{\prime\prime}\cap C^{\prime\prime})=P$ is a solution to OPKR_{WD} with input $P$ and $C$.

###### Proof.

To simplify the proof, we refer to solutions of OPKR_{WD} and OPKR by the subset of $C$ needed to unfold the resulting program back to the input program. First, let us make the following four observations:

- A)
for any $c_{1},c_{2}\in C$, where $c_{1}\not=c_{2}$ and $\mathit{body}(c_{1})\cap\mathit{body}(c_{2})=\emptyset$, the refactoring of $P$ using $\{c_{1},c_{2}\}$ results in a program $P^{\prime}$ such that $\mathit{size}(P^{\prime})=\mathit{size}(P)-(4|E|-8)$.

- B)
for any $c_{1},c_{2}\in C$, where $c_{1}\not=c_{2}$ and $|\mathit{body}(c_{1})\cap\mathit{body}(c_{2})|=1$ the refactoring of $P$ using $\{c_{1},c_{2}\}$ results in a program $P^{\prime}$ such that $\mathit{size}(P^{\prime})=\mathit{size}(P)-(3|E|-8)$.

- C)
for any $c_{1},c_{2}\in C$, where $c_{1}\not=c_{2}$ and $|\mathit{body}(c_{1})\cap\mathit{body}(c_{2})|=2$ the refactoring of $P$ using $\{c_{1},c_{2}\}$ results in a program $P^{\prime}$ such that $\mathit{size}(P^{\prime})=\mathit{size}(P)-(2|E|-8)$.

- D)
for any mutually distinct $c_{1},c_{2},c_{3}\in C$, where $|\mathit{body}(c_{1})\cap\mathit{body}(c_{2})|=1$, $|\mathit{body}(c_{1})\cap\mathit{body}(c_{3})|=1$, and $\mathit{body}(c_{2})\cap\mathit{body}(c_{3})=\emptyset$, the refactoring of $P$ using $\{c_{1},c_{2},c_{3}\}$ results in a program $P^{\prime}$ such that $\mathit{size}(P^{\prime})=\mathit{size}(P)-(4|E|-12)$.

These observations imply that a solution $S^{\prime}$ to OPKR should contain a maximal number of rules in $C$ whose bodies have no overlap, i.e. a maximal independent set of $G$. By Proposition 1, the subset $S$ of $S^{\prime}$ corresponding to a maximal independent set of $G$ must have size at least $\left\lceil\frac{|V|}{4}\right\rceil$.

Now, let $G^{\prime}=(V^{\prime},E^{\prime})$ be the graph derived from $G$ by removing all vertices and edges associated with rules of $S$, and all isolated vertices; isolated vertices represent rules of $C$ whose body literals are covered by other rules (observation (D)). Then by Observation 1, the maximum independent set of $G^{\prime}$ is bounded from above by

$\left\lceil\frac{|V|-\left\lceil\frac{|V|}{4}\right\rceil}{2}\right\rceil=\left\lceil\frac{3|V|}{8}\right\rceil.$ |

While the maximum independent set of $G^{\prime}$ does not correspond to an optimal refactoring, it upper bounds the size of the optimal refactoring.

Observe that $|S|=\left\lceil\frac{|V|}{4}\right\rceil+k=\left\lceil\frac{|V|+4k}{4}\right\rceil$ for some $0\leq k\leq|V|-\frac{|V|}{4}$. Thus,

$|S^{\prime}\setminus S|\leq\left\lceil\frac{|V|-\left\lceil\frac{|V|+4k}{4}\right\rceil}{2}\right\rceil=\left\lceil\frac{3|V|-4k}{8}\right\rceil.$ |

Furthermore, taking the difference of this bound for $k$ and $k+1$ (the argument of the ceiling decreases by $\frac{1}{2}$) gives

$\left\lceil\frac{3|V|-4k}{8}\right\rceil-\left\lceil\frac{3|V|-4(k+1)}{8}\right\rceil\leq 1.$ |

Thus, increasing $k$ by 1 may decrease the maximum size of $S^{\prime}\setminus S$ by 1, and increasing $k$ by 2 will decrease it by exactly 1. By the observations highlighted above, this implies a decrease in the size of $P$ by at least $2|E|-4-(|E|-4)=|E|$ when $k$ is increased by 1, and a decrease by $4|E|-8-(|E|-4)=3|E|-4$ when $k$ is increased by 2. Thus, increasing $k$ always results in a better refactoring of $P$. Hence, $S^{\prime}$ is optimal when it contains a maximum independent set, i.e. $S$ must be associated with a maximum independent set of $G$.∎

###### Lemma 3.

Let $G=(V,E)$ be a 3-regular Hamiltonian graph, $\mathit{induce}(G)=(P,C,g,h)$ a $G$-induced refactoring problem, and $P^{\prime}$ a solution to OPKR for input $P$ and $C$ where for $S\subseteq C$, $\textsf{unfold}(P^{\prime}\setminus S,P^{\prime}\cap S)=P$ . Then for every rule $c\in S$, $|\{b\mid b\in\mathit{body}(c)\cap\mathit{body}(c^{\prime})\wedge c^{\prime}\in S\wedge c\not=c^{\prime}\}|\leq 1$.

###### Proof.

By the observations presented in the proof of Lemma 2, if $|\{b\mid b\in\mathit{body}(c)\cap\mathit{body}(c^{\prime})\wedge c^{\prime}\in S\wedge c\not=c^{\prime}\}|>1$, then including $c$ reduces the quality of the refactoring, implying that $S$ is not optimal.∎

###### Lemma 4.

OPKR is $\mathcal{NP}$-hard.

###### Proof.

We can reduce OPKR_{WD} to OPKR. Let $G=(V,E)$ be a 3-regular Hamiltonian graph, $\mathit{induced}(G)=(P,C,g,h)$ a $G$-induced refactoring problem, and $P^{\prime}$ an OPKR solution for input $P$ and $C$ where for $S^{\prime}\subseteq C$, $\textsf{unfold}(P^{\prime}\setminus S^{\prime},P^{\prime}\cap S^{\prime})=P$. By Lemma 2, we know $S^{\prime}$ contains a subset $S$ such that for a solution $P^{\prime\prime}$ of OPKR_{WD}, $\textsf{unfold}(P^{\prime\prime}\setminus S,P^{\prime\prime}\cap S)=P$, and by Lemma 3, for any rule $c\in S^{\prime}\setminus S$, $|\{b\mid b\in\mathit{body}(c)\cap\mathit{body}(c^{\prime})\wedge c^{\prime}\in S\}|=1$. Now let $S^{*}=\{c\mid|\mathit{body}(c)\cap\mathit{body}(c^{\prime})|=1\wedge c,c^{\prime}\in S\wedge c^{\prime}\not=c\}$. Observe that $S^{*}$ has an even number of members. Thus, we can define $S^{*}=S_{1}\cup S_{2}$ such that $S_{1}\cap S_{2}=\emptyset$ and $|S_{1}|=|S_{2}|$. A solution to OPKR_{WD} is $Q$ such that $\textsf{unfold}(P^{\prime}\setminus(S\setminus S_{1}),P^{\prime}\cap(S\setminus S_{1}))=P$ (or $\textsf{unfold}(P^{\prime}\setminus(S\setminus S_{2}),P^{\prime}\cap(S\setminus S_{2}))=P$).∎

Figure 4 shows an example OPKR encoding.

Optimal Knowledge Refactoring (OKR)
Given: A definite logic program $P$, and a finite set of invented rules $C$.
Problem: Find a refactored program $\mathcal{Q}$ such that for any refactored program $\mathcal{Q^{\prime}}$, $size(\mathcal{Q})\leq size(\mathcal{Q^{\prime}})$.

###### Theorem 4.

OKR is $\mathcal{NP}$-hard.

###### Proof.

Observe that OPKR is a subproblem of OKR.∎

$\displaystyle\mathcal{P}=\left\{\begin{array}[]{c}p\leftarrow b_{1},\cdots,b_{12},c_{1}\\\vdots\\p\leftarrow b_{1},\cdots,b_{12},c_{12}\\\end{array}\right\}$ |

$\displaystyle{S}=\left\{\begin{array}[]{ll}a_{1}\leftarrow b_{1},b_{2},b_{3}&a_{2}\leftarrow b_{4},b_{5},b_{6}\\a_{3}\leftarrow b_{4},b_{7},b_{8}&a_{4}\leftarrow b_{5},b_{7},b_{9}\\a_{5}\leftarrow b_{1},b_{10},b_{11}&a_{6}\leftarrow b_{2},b_{9},b_{10}\\a_{7}\leftarrow b_{6},b_{11},b_{12}&a_{8}\leftarrow b_{3},b_{8},b_{12}\\\end{array}\right\}$ |

$\displaystyle\text{OPKR}=\left\{\begin{array}[]{cc}a_{1}\leftarrow b_{1},b_{2},b_{3}&a_{3}\leftarrow b_{4},b_{7},b_{8}\\a_{4}\leftarrow b_{5},b_{7},b_{9}&a_{7}\leftarrow b_{6},b_{11},b_{12}\\\end{array}\right\}$ |

## Appendix B Proof of Theorem3

Let us consider an invented rule $r\in{S}$ which minimally reduces the size of a logic program $\mathcal{P}$ through refactoring. Such an $r$ has the following properties: (i) $|\mathit{body}(r)|=2$, (ii) there exists $R\subseteq\mathcal{P}$ such that $|R|=4$, and (iii) refactoring $R$ by $r$ results in a program $R^{\prime}$ such that $\mathit{size}(R^{\prime})<\mathit{size}(R)$. We refer to such an $r\in{S}$ as $\mathcal{P}$-minimal. Note that if $|R|<4$, then refactoring by $r\in{S}$ (with $|\mathit{body}(r)|=2$) does not reduce the size of $R$.

If refactoring $\mathcal{P}$ using $S^{\prime}\subset{S}$, where all $r\in S^{\prime}$ are $\mathcal{P}$-minimal, results in a program $\mathcal{P}^{\prime}$ where for every $c^{\prime}\in\mathcal{P}^{\prime}$, $\mathit{body}(c^{\prime})\subseteq\{\mathit{pred}(r)\mid r\in S^{\prime}\}$, then each rule $c^{*}\in\mathcal{P}$ is refactored by at most $\left\lceil\frac{s-1}{2}\right\rceil$ $\mathcal{P}$-minimal invented rules. In the worst case, each invented rule in $S^{\prime}$ refactors precisely 4 rules of $\mathcal{P}$; thus we need $K=\left\lceil\frac{n}{4}\right\rceil\left\lceil\frac{s-1}{2}\right\rceil$.

If we refactor by invented rules which are not $\mathcal{P}$-minimal, but which individually reduce the size of $\mathcal{P}$, fewer invented rules are needed to refactor $\mathcal{P}$ into a program $\mathcal{P}^{\prime}$ with the outlined properties because (i) at most $\left\lceil\frac{s-1}{m}\right\rceil$ invented rules with body size $m\geq 2$ are needed to cover the rules in $\mathcal{P}$, and (ii) larger invented rules need to occur $2+\max\{0,4-m\}$ times in $\mathcal{P}$ to reduce the size of $\mathcal{P}$. Observe that $\left\lceil\frac{n}{2+\max\{0,4-m\}}\right\rceil\left\lceil\frac{s-1}{m}\right\rceil\leq\left\lceil\frac{n}{4}\right\rceil\left\lceil\frac{s-1}{2}\right\rceil$ for $m\geq 2$, and thus the right side is an upper bound.

Note that the bound is tight, as there exist programs whose optimal refactoring requires precisely $\left\lceil\frac{n}{4}\right\rceil\left\lceil\frac{s-1}{2}\right\rceil$ invented rules. Consider the following propositional program:

$\mathcal{P}=\left\{\begin{array}[]{l}p\leftarrow q_{1},q_{2},q_{5},q_{6},q_{7},q_{8}\\p\leftarrow q_{1},q_{2},q_{3},q_{4},q_{5},q_{6}\\p\leftarrow q_{1},q_{2},q_{3},q_{4},q_{7},q_{8}\\p\leftarrow q_{11},q_{12},q_{15},q_{16},q_{17},q_{18}\\p\leftarrow q_{13},q_{14},q_{15},q_{16},q_{17},q_{18}\\p\leftarrow q_{11},q_{12},q_{13},q_{14},q_{17},q_{18}\\p\leftarrow q_{11},q_{12},q_{13},q_{14},q_{15},q_{16}\\p\leftarrow q_{9},q_{10},q_{11},q_{12},q_{17},q_{18}\\p\leftarrow q_{9},q_{10},q_{13},q_{14},q_{15},q_{16}\\p\leftarrow q_{3},q_{4},q_{7},q_{8},q_{9},q_{10}\\p\leftarrow q_{1},q_{2},q_{5},q_{6},q_{9},q_{10}\\p\leftarrow q_{3},q_{4},q_{5},q_{6},q_{7},q_{8}\\\end{array}\right\}$ |

The considered set of invented rules is as follows:

$R=\left\{\begin{array}[]{ll}p_{1}\leftarrow q_{1},q_{2}&p_{2}\leftarrow q_{3},q_{4}\\p_{3}\leftarrow q_{5},q_{6}&p_{4}\leftarrow q_{7},q_{8}\\p_{5}\leftarrow q_{9},q_{10}&p_{6}\leftarrow q_{11},q_{12}\\p_{7}\leftarrow q_{13},q_{14}&p_{8}\leftarrow q_{15},q_{16}\\p_{9}\leftarrow q_{17},q_{18}&\end{array}\right\}$ |

Observe that

$\left\lceil\frac{n}{4}\right\rceil\left\lceil\frac{s-1}{2}\right\rceil=\left\lceil\frac{12}{4}\right\rceil\left\lceil\frac{6}{2}\right\rceil=9.$ |
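The bound can be evaluated directly; a minimal sketch (the helper name is ours) for $n=12$ rules of size $s=7$ (one head plus six body literals):

```python
from math import ceil

def k_bound(n, s):
    """Theorem 3 bound on the number of invented rules K."""
    return ceil(n / 4) * ceil((s - 1) / 2)

# The example program: n = 12 rules, each of size s = 7.
print(k_bound(12, 7))  # 9
```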

The refactored program is as follows:

$\mathcal{Q}=\left\{\begin{array}[]{ll}p\leftarrow p_{1},p_{3},p_{4}&p\leftarrow p_{1},p_{2},p_{3}\\p\leftarrow p_{1},p_{2},p_{4}&p\leftarrow p_{6},p_{8},p_{9}\\p\leftarrow p_{7},p_{8},p_{9}&p\leftarrow p_{6},p_{7},p_{9}\\p\leftarrow p_{6},p_{7},p_{8}&p\leftarrow p_{5},p_{6},p_{9}\\p\leftarrow p_{5},p_{7},p_{8}&p\leftarrow p_{2},p_{4},p_{5}\\p\leftarrow p_{1},p_{3},p_{5}&p\leftarrow p_{2},p_{3},p_{4}\\p_{1}\leftarrow q_{1},q_{2}&p_{2}\leftarrow q_{3},q_{4}\\p_{3}\leftarrow q_{5},q_{6}&p_{4}\leftarrow q_{7},q_{8}\\p_{5}\leftarrow q_{9},q_{10}&p_{6}\leftarrow q_{11},q_{12}\\p_{7}\leftarrow q_{13},q_{14}&p_{8}\leftarrow q_{15},q_{16}\\p_{9}\leftarrow q_{17},q_{18}&\end{array}\right\}$ |

The refactoring by $R$ is optimal as any literal in the body of a rule in $\mathcal{P}$ is only contained in the added invented rules of $\mathcal{Q}$. Furthermore, no literal of $\mathcal{P}$ is used more than once by an invented rule. Even though the refactoring is large with respect to the input program, it only reduces the size of $\mathcal{P}$ by 9. Observe that this program is a Balanced Incomplete Block Design (BIBD) (Beth, Jungnickel, and Lenz 1986), in particular a (9,12,4,3,1)-design.
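A quick check of the size arithmetic (our own calculation, mirroring the text): $\mathcal{P}$ has $12\cdot 7=84$ literals, while $\mathcal{Q}$ has $12\cdot 4+9\cdot 3=75$, a reduction of 9.

```python
# Size arithmetic for the tightness example above.
n_rules, body_p = 12, 6        # P: 12 rules with 6 body literals each
n_aux, body_aux = 9, 2         # R: 9 invented rules with 2 body literals each
body_q = 3                     # each refactored rule calls 3 invented rules

size_p = n_rules * (1 + body_p)                           # 84
size_q = n_rules * (1 + body_q) + n_aux * (1 + body_aux)  # 48 + 27 = 75
print(size_p, size_q, size_p - size_q)  # 84 75 9
```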

## Appendix C MaxSAT Encoding of MaxRefactor

We describe the MaxSAT encoding of MaxRefactor, which is equivalent to the COP encoding presented in Section 4.2. We provide details on implementing the formulation in the Boolean domain.

##### Decision Variables

For each possible invented rule $r_{k}$ with $k\in[1,K]$ and each $p\in Pr(\mathcal{P})$, we use a Boolean variable $\mathit{r}_{k,p}^{m}$ to indicate that there are $m$ literals with predicate symbol $p$ in the body of $r_{k}$. The domain of $m$ is defined by the maximum number of times $p$ appears in an input rule. The definitions of $\mathit{use}_{c,k}^{t}$, $\mathit{cover}_{c,a,k}^{t}$, $\mathit{used}_{k}$, and $\mathit{covered}_{c,a}$ are the same as in Section 4.2.

##### Constraints

First, the variables $r_{k,p}^{m}$ must be mutually exclusive for different values of $m$. The constraint is set as follows:

$\forall k,p,m_{1},m_{2}\,(m_{1}<m_{2}),\;{r}_{k,p}^{m_{1}}\rightarrow\neg{r}_{k,p}^{m_{2}}.$ |

Second, we keep the same constraint as in Section 4.2 to ensure the ordering relation between the variables $\mathit{use}_{c,k}^{t}$:

$\forall c,k,t\,(t>1),\;\mathit{use}_{c,k}^{t}\rightarrow\mathit{use}_{c,k}^{t-1}.$ |
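As an illustration, the following sketch (a hypothetical DIMACS-style variable numbering of our own, not the paper's implementation) generates the hard clauses for these two constraints:

```python
# Hard-clause generation for the first two MaxSAT constraints; negative
# integers denote negated literals, as in DIMACS CNF.

def exclusivity_clauses(r_vars):
    """r_vars: {(k, p, m): var}. Emit (-r^{m1} v -r^{m2}) for each m1 < m2."""
    clauses = []
    for (k1, p1, m1), v1 in r_vars.items():
        for (k2, p2, m2), v2 in r_vars.items():
            if (k1, p1) == (k2, p2) and m1 < m2:
                clauses.append([-v1, -v2])
    return clauses

def ordering_clauses(use_vars):
    """use_vars: {(c, k, t): var}. Emit (-use^t v use^{t-1}) for each t > 1."""
    return [[-v, use_vars[(c, k, t - 1)]]
            for (c, k, t), v in use_vars.items() if t > 1]

r_vars = {(1, "p", 1): 1, (1, "p", 2): 2}
use_vars = {("c1", 1, 1): 3, ("c1", 1, 2): 4}
print(exclusivity_clauses(r_vars))  # [[-1, -2]]
print(ordering_clauses(use_vars))   # [[-4, 3]]
```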

The other constraints are slightly modified due to the changed definition of the variables $r_{k,p}^{m}$. Constraint (2) is replaced as follows:

$\forall c,p,k,t,m\,(p{\rm\;not\;in\;}c),\;\neg(use_{c,k}^{t}\wedge r_{k,p}^{m}).$ |

Moreover, Constraint (3) is replaced as follows:

$\forall c,a,k,t\,(e(a,p)),\;{cover}_{c,a,k}^{t}\rightarrow\left(use_{c,k}^{t}\wedge\exists m,\;r_{k,p}^{m}\right).$ |

For Constraint(4), we use a modified pseudo-Boolean encoding:

$\forall c,p,k,t,m,\;r_{k,p}^{m}\rightarrow\left(\sum_{a\in body(c)\wedge e(a,p)}{cover}_{c,a,k}^{t}\right)\leq m.$ |

Specifically, we utilise a BDD-like encoding (Eén and Sörensson 2006) to transform the pseudo-Boolean inequality into CNF clauses.
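The paper's translation follows Eén and Sörensson (2006); purely as an illustration of how such a cardinality bound becomes clauses, the following sketch uses a simpler sequential-counter (Sinz-style) at-most-$k$ encoding instead — the variable numbering and brute-force checker are our own, not the paper's:

```python
from itertools import product

def at_most_k(xs, k, top):
    """Sinz-style sequential-counter clauses forcing sum(xs) <= k.
    xs: DIMACS variables; top: current largest variable; returns (clauses, new_top).
    Auxiliary s[i][j] reads 'at least j+1 of xs[0..i] are true'."""
    n = len(xs)
    if k >= n:
        return [], top
    s = [[top + i * k + j + 1 for j in range(k)] for i in range(n)]
    clauses = []
    for i in range(n):
        clauses.append([-xs[i], s[i][0]])                    # xi -> count >= 1
        if i > 0:
            for j in range(k):
                clauses.append([-s[i - 1][j], s[i][j]])      # counter is monotone
            for j in range(k - 1):
                clauses.append([-xs[i], -s[i - 1][j], s[i][j + 1]])  # xi increments it
            clauses.append([-xs[i], -s[i - 1][k - 1]])       # forbid a (k+1)-th true xi
    return clauses, top + n * k

# Brute-force check on 4 variables with k = 2: an assignment of x1..x4 extends
# to the auxiliaries iff at most 2 of the xs are true.
xs = [1, 2, 3, 4]
clauses, new_top = at_most_k(xs, 2, 4)

def extendable(assign):
    aux = range(5, new_top + 1)
    for bits in product([False, True], repeat=len(aux)):
        full = {**assign, **dict(zip(aux, bits))}
        if all(any(full[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False

assert all(extendable(dict(zip(xs, bits))) == (sum(bits) <= 2)
           for bits in product([False, True], repeat=4))
print("at-most-2 encoding verified on 4 variables")
```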

Finally, the constraints to calculate $\mathit{used}_{k}$ and $\mathit{covered}_{c,a}$ remain unchanged:

$\forall k,\;\left(\mathit{used}_{k}\leftrightarrow\exists c,t,\;\mathit{use}_{c,k}^{t}\right).$ |

$\forall c,a,\;\left(\mathit{covered}_{c,a}\leftrightarrow\exists k,t,\;\mathit{cover}_{c,a,k}^{t}\right).$ |

#### Objective

The objective is slightly modified to minimise the following function:

$\sum_{k}{used_{k}}+\sum_{k,p,m}{m\cdot r_{k,p}^{m}}+\sum_{c,k,t}{use_{c,k}^{t}}-\sum_{c,a}{covered_{c,a}}.$ |

To clarify, this objective can be transformed into unit soft constraints in MaxSAT: (i) Each $\neg used_{k}$ has a weight of 1; (ii) Each $\neg r_{k,p}^{m}$ has a weight of $m$; (iii) Each $\neg use_{c,k}^{t}$ has a weight of 1; (iv) Each ${covered}_{c,a}$ has a weight of 1.
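The soft-constraint transformation above can be sketched as follows (the data layout and function name are ours, not the paper's code); each soft clause is a single literal paired with its weight:

```python
# Turning the objective terms into weighted unit soft clauses for MaxSAT.

def soft_clauses(used_vars, r_vars, use_vars, covered_vars):
    """Each dict maps an index to a DIMACS variable; returns (literal, weight) pairs."""
    soft = []
    soft += [(-v, 1) for v in used_vars.values()]            # (i) penalise each used_k
    soft += [(-v, m) for (k, p, m), v in r_vars.items()]     # (ii) weight m per r_{k,p}^m
    soft += [(-v, 1) for v in use_vars.values()]             # (iii) penalise each use
    soft += [(v, 1) for v in covered_vars.values()]          # (iv) reward covered literals
    return soft

soft = soft_clauses({1: 10}, {(1, "p", 2): 11}, {("c1", 1, 1): 12}, {("c1", "a"): 13})
print(soft)  # [(-10, 1), (-11, 2), (-12, 1), (13, 1)]
```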