Details

    • Type: New Feature
    • Status: Done
    • Resolution: Done
    • Affects Version/s: TERMS_REFACTOR_BRANCH
    • Fix Version/s: None
    • Component/s: Bigdata SAIL

      Description

      I've been thinking about this some more and have talked it over with Martyn. It seems that there are a few interesting directions in which we could take this. All of them look pretty easy to me.


      - Provide SPARQL query evaluation from prolog. You would get back result sets (backtrack to materialize the solutions) or graphs (backchain to materialize the statements). However, note that joins in this approach would happen in prolog rather than in the bigdata query engine. (A sketch follows this list.)


      - Provide triple/quad pattern "predicates" so you can use backtracking to visit everything which would be materialized by the access path for that triple/quad pattern. This is a pretty low-level integration, but it would be very useful for building up interesting inference patterns. (Second sketch below.)


      - Possibly expose the query optimizer's rewrite of a clause made up of triple/quad patterns. You pass in the original clause (or list) and you get back a rewrite of the clause (or list) in which the triple/quad patterns have been reordered by the query optimizer. (Third sketch below.)


      - Provide a translation from a clause (or list) made up of triple/quad patterns into a prolog representation of a query plan. This will be important for high performance, since executing the query plan will let you use the native joins, and those are much more scalable than backtracking over prolog predicates. (Fourth sketch below.)


      - Provide a means to embed prolog within a SPARQL query so you can do things like the transitive closure of some part of the type or property hierarchy within the query. I do not have any syntax to propose for this right now, but it strikes me as an interesting direction in which we could push. tuprolog is lightweight and 100% java, so you can easily instantiate a prolog interpreter within the context of evaluating a query. For example, we could scope the interpreter to the chunked evaluation of an operator. When we fire off a bop (bigdata operator), it sees a chunk of intermediate solutions coming in. It can start an interpreter whose lifespan is scoped to that chunk of intermediate solutions, perform any interesting reasoning you like, and output one or more chunks of solutions which will flow downstream to the next operator in the query plan. (Last sketch below.)
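
      For the first direction, a minimal sketch, assuming the Sesame API that the SAIL already exposes plus tuprolog's Prolog/Theory/SolveInfo classes. The solution/2 functor and the binding names "s" and "o" are made up for the example, and the atom quoting is naive (no escaping):

        import alice.tuprolog.Prolog;
        import alice.tuprolog.SolveInfo;
        import alice.tuprolog.Theory;
        import org.openrdf.query.BindingSet;
        import org.openrdf.query.QueryLanguage;
        import org.openrdf.query.TupleQuery;
        import org.openrdf.query.TupleQueryResult;
        import org.openrdf.repository.RepositoryConnection;

        public class SparqlBacktrack {

            // Evaluate a SELECT query and assert each solution as a
            // solution/2 fact in a fresh prolog engine.
            public static Prolog load(final RepositoryConnection cxn,
                    final String sparql) throws Exception {
                final TupleQuery q = cxn.prepareTupleQuery(
                        QueryLanguage.SPARQL, sparql);
                final TupleQueryResult result = q.evaluate();
                final StringBuilder facts = new StringBuilder();
                try {
                    while (result.hasNext()) {
                        final BindingSet bs = result.next();
                        facts.append("solution('").append(bs.getValue("s"))
                                .append("', '").append(bs.getValue("o"))
                                .append("').\n");
                    }
                } finally {
                    result.close();
                }
                final Prolog engine = new Prolog();
                engine.setTheory(new Theory(facts.toString()));
                return engine;
            }

            // Backtracking visits one materialized solution per redo.
            public static void printAll(final Prolog engine) throws Exception {
                SolveInfo info = engine.solve("solution(S, O).");
                while (info.isSuccess()) {
                    System.out.println(info.getTerm("S") + " "
                            + info.getTerm("O"));
                    if (!info.hasOpenAlternatives())
                        break;
                    info = engine.solveNext();
                }
            }
        }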
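
      For the pattern predicates, tuprolog lets a java Library contribute predicates through its name_arity method naming convention. Only the fully bound case is sketched, since library predicates are boolean; real backtracking over an access path would need choice points or facts asserted into a theory. Treating every argument as an IRI atom is an assumption:

        import alice.tuprolog.Library;
        import alice.tuprolog.Term;
        import org.openrdf.model.URI;
        import org.openrdf.model.ValueFactory;
        import org.openrdf.repository.RepositoryConnection;

        public class AccessPathLibrary extends Library {

            private final RepositoryConnection cxn;

            public AccessPathLibrary(final RepositoryConnection cxn) {
                this.cxn = cxn;
            }

            // triple(S, P, O): succeeds iff the ground statement is in
            // the store (inferred statements included).
            public boolean triple_3(final Term s, final Term p, final Term o) {
                try {
                    final ValueFactory vf = cxn.getValueFactory();
                    final URI subj = vf.createURI(strip(s));
                    final URI pred = vf.createURI(strip(p));
                    final URI obj = vf.createURI(strip(o));
                    return cxn.hasStatement(subj, pred, obj, true);
                } catch (Exception ex) {
                    return false; // sketch only: store errors fail the goal
                }
            }

            // Strip the quotes tuprolog puts around non-alphanumeric atoms.
            private static String strip(final Term t) {
                final String s = t.getTerm().toString();
                return s.startsWith("'") ? s.substring(1, s.length() - 1) : s;
            }
        }

      Loading it is one call: engine.loadLibrary(new AccessPathLibrary(cxn)).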
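
      For the optimizer rewrite, the simplest useful sketch is a reorder by estimated range count (bigdata keeps fast range counts per access path). RangeCounter is an assumed callback, not a bigdata API, and the real static optimizer also weighs shared join variables, not just cardinality:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        import alice.tuprolog.Struct;

        public class ClauseReorder {

            // Assumed hook for asking the store how big a pattern's
            // access path is.
            public interface RangeCounter {
                long rangeCount(Struct pattern);
            }

            public static List<Struct> reorder(final List<Struct> patterns,
                    final RangeCounter counter) {
                final List<Struct> out = new ArrayList<Struct>(patterns);
                Collections.sort(out, new Comparator<Struct>() {
                    public int compare(final Struct a, final Struct b) {
                        // Most selective (smallest) pattern runs first.
                        return Long.compare(counter.rangeCount(a),
                                counter.rangeCount(b));
                    }
                });
                return out;
            }
        }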
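
      For the query plan translation, the plan can itself be an ordinary prolog term. The join/pipeline/triple functors below are invented for the example; they are not an existing bigdata vocabulary:

        import alice.tuprolog.Struct;
        import alice.tuprolog.Term;
        import alice.tuprolog.Var;

        public class PlanTerms {

            public static Struct triple(final Term s, final Term p,
                    final Term o) {
                return new Struct("triple", new Term[] { s, p, o });
            }

            // join(pipeline, [triple(..), triple(..), ...])
            public static Struct pipeline(final Term[] patterns) {
                return new Struct("join", new Term[] {
                        new Struct("pipeline"), new Struct(patterns) });
            }

            public static void main(final String[] args) {
                final Struct plan = pipeline(new Term[] {
                        triple(new Var("S"), new Struct("rdf:type"),
                                new Var("T")),
                        triple(new Var("T"), new Struct("rdfs:subClassOf"),
                                new Var("C")) });
                System.out.println(plan); // the plan, printed as a term
            }
        }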
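
      And for the embedded interpreter, a sketch of the chunk-scoped lifecycle. Solutions are modeled as prolog fact strings for brevity; a real bop would bind tuprolog terms to its intermediate solutions rather than go through strings:

        import java.util.ArrayList;
        import java.util.List;

        import alice.tuprolog.Prolog;
        import alice.tuprolog.SolveInfo;
        import alice.tuprolog.Theory;

        public class PrologChunkOp {

            private final String rules; // e.g. transitive closure rules

            public PrologChunkOp(final String rules) {
                this.rules = rules;
            }

            // One interpreter per chunk: assert the incoming solutions,
            // run the goal, emit every answer for the next operator.
            public List<String> apply(final List<String> chunkFacts,
                    final String goal, final String outVar) throws Exception {
                final Prolog engine = new Prolog(); // lifespan == this chunk
                engine.setTheory(new Theory(rules));
                final StringBuilder sb = new StringBuilder();
                for (String fact : chunkFacts) {
                    sb.append(fact).append("\n");
                }
                engine.addTheory(new Theory(sb.toString()));
                final List<String> out = new ArrayList<String>();
                SolveInfo info = engine.solve(goal); // goal ends with "."
                while (info.isSuccess()) {
                    out.add(info.getTerm(outVar).toString());
                    if (!info.hasOpenAlternatives())
                        break;
                    info = engine.solveNext();
                }
                return out;
            }
        }

      With rules like "reachable(X,Z) :- edge(X,Z). reachable(X,Z) :- edge(X,Y), reachable(Y,Z).", a chunk of edge/2 facts in and the goal "reachable(a, W)." out is exactly the in-query transitive closure case above.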

      I should also note in passing that Martyn often uses jython as an interpreter to access Java. tuprolog is written in java, as is bigdata, so you should be able to mix java, prolog, SPARQL, and python more or less freely in the right environment.

        Activity

        bryanthompson added a comment -

        I've removed the tuprolog dependency in preparation for the 1.1.x release. However, this remains an interesting option. One approach would be to declare a local SERVICE which was a prolog reasoner and provide a bridge between the bigdata IVs and the prolog reasoner through an RDF serialization of the solutions flowing through a query.
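
        The shape of that bridge might look like the interface below. Everything in it is an assumption; it does not reproduce bigdata's actual SERVICE integration points, only the data flow described above: solutions out of the query, a serialization that resolves the IVs to RDF Values, and answers parsed back in.

          import java.util.List;

          import org.openrdf.query.BindingSet;

          public interface PrologServiceBridge {

              // Serialize the solutions flowing into the SERVICE,
              // resolving bigdata IVs to RDF Values along the way.
              String serialize(List<BindingSet> chunk);

              // Run the prolog reasoner over the serialized solutions
              // and parse its answers back into binding sets for the
              // rest of the query.
              List<BindingSet> reason(String serializedSolutions);
          }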

          People

          • Assignee:
            mikepersonick
          • Reporter:
            bryanthompson
          • Votes:
            0
          • Watchers:
            3

            Dates

            • Created:
            • Updated:
            • Resolved: