The integration of learning and reasoning is a key challenge in artificial intelligence and machine learning. Neuro-Symbolic AI (NeSy) addresses this challenge by combining symbolic reasoning with neural networks. This paper draws parallels between NeSy and Statistical Relational AI (StarAI), an earlier field that also integrates learning and reasoning, and identifies seven dimensions common to both, which it uses to categorize approaches: logic inference (model-based vs. proof-based), logic syntax (propositional, relational, or first-order), logic semantics (fuzzy or probabilistic), learning (parameter and structure learning), symbols vs. sub-symbols, degree of integration, and tasks (e.g., distant supervision, collective classification, and knowledge graph completion).

NeSy systems often use logic either as a regularization loss or as a template for a neural network architecture. Most relax discrete logic semantics to enable gradient-based learning, typically through fuzzy or probabilistic logic; systems that retain pure discrete logical reasoning face inference that is computationally intractable in general. The paper proposes a "NeSy recipe" that unifies systems under a common framework built on two fundamental choices: constructing a purely logical architecture, represented as an AND/OR tree, and compiling that architecture into a computational graph. Using algebraic methods, the AND/OR tree is transformed into a continuous, differentiable structure, yielding a hybrid deep computational graph. The paper highlights the importance of these dimensions for understanding and developing NeSy systems.
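The fuzzy relaxation of logic semantics and its use as a regularization loss can be sketched as follows. This is a minimal illustration, not any particular system's implementation: it assumes the product t-norm (one common fuzzy-logic choice; Łukasiewicz and Gödel t-norms are alternatives), and the rule, atom names, and truth values are made up for the example.

```python
import math

# Relax Boolean connectives into differentiable operations on [0, 1]
# using the product t-norm and its dual t-conorm.
def fuzzy_not(a):        # NOT a      ->  1 - a
    return 1.0 - a

def fuzzy_and(a, b):     # a AND b    ->  a * b          (product t-norm)
    return a * b

def fuzzy_or(a, b):      # a OR b     ->  a + b - a*b    (product t-conorm)
    return a + b - a * b

def fuzzy_implies(a, b): # a -> b  is  NOT a OR b
    return fuzzy_or(fuzzy_not(a), b)

# A rule such as  bird(x) AND NOT penguin(x) -> flies(x),
# with neural-network outputs standing in for the atoms' truth values:
bird, penguin, flies = 0.9, 0.1, 0.8
truth = fuzzy_implies(fuzzy_and(bird, fuzzy_not(penguin)), flies)

# Used as a regularization loss: the network is penalized
# in proportion to how strongly it violates the rule.
loss = -math.log(truth)
```

Because every connective is a smooth function of its inputs, the truth degree of the whole formula is differentiable with respect to the network outputs, which is exactly what the relaxation buys over discrete logic.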
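The second step of the recipe, compiling the logical architecture into a computational graph, can be sketched by evaluating one AND/OR tree under two different algebraic structures (semirings). The tree encoding, the example formula, and the semiring choices below are illustrative assumptions, not the paper's notation.

```python
# Evaluate an AND/OR tree under a pluggable (times, plus) pair:
# AND nodes combine children with `times`, OR nodes with `plus`.
def eval_tree(node, leaves, times, plus):
    op = node[0]
    if op == "leaf":
        return leaves[node[1]]
    vals = [eval_tree(child, leaves, times, plus) for child in node[1:]]
    combine = times if op == "and" else plus
    result = vals[0]
    for v in vals[1:]:
        result = combine(result, v)
    return result

# The formula (a AND b) OR c as an AND/OR tree.
tree = ("or", ("and", ("leaf", "a"), ("leaf", "b")), ("leaf", "c"))

# Boolean algebra: ordinary discrete logical evaluation.
discrete = eval_tree(tree, {"a": True, "b": False, "c": True},
                     lambda x, y: x and y, lambda x, y: x or y)

# Probabilities over independent leaves: the same tree becomes a
# continuous, differentiable graph (valid as a probability only
# when no leaf is repeated in the tree).
soft = eval_tree(tree, {"a": 0.9, "b": 0.5, "c": 0.2},
                 lambda x, y: x * y, lambda x, y: x + y - x * y)
```

Swapping the algebra while keeping the tree fixed is the point: the same purely logical architecture yields either exact discrete reasoning or a differentiable relaxation suitable for gradient-based learning.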