Saturday, January 29, 2022

Music Album Covers with Lips on Them - Part 3

A composite image of 16 album covers featuring lips.

This is the third post in our lips-on-album-covers series. The first two posts are lips1 and lips2.

Once we created the first two posts, we started seeing lips everywhere. There is a name for the phenomenon in which something you've just noticed starts popping up everywhere: it's called the frequency illusion, the Baader-Meinhof phenomenon, or frequency bias. I guess we have "lips on the brain".

Our favorite new discoveries in these 16 albums with lips on the covers are the American composer Nico Muhly's "Mothertongue" (2008) and the Icelandic artist Gyða Valtýsdóttir's "Ox" (2021).

Albums with lips on them:

1974 Capability Brown – "Voice"
1980 Jack McDuff – "Kisses"
1988 Pretty Boy Floyd – "Bullets & Lipstik"
1997 The Flashcubes – "Bright Lights"
2002 Cherry Lips – "Breaking Up is Hard to Do"
2003 Gob – "Foot in Mouth Disease"
2004 Kelli Ali – "Hot Lips"
2008 Nico Muhly – "Mothertongue"
2012 Sare Havlicek – "Bipolar Duality EP"
2013 Dank – "Blow Me (The Remixes)"
2013 Spain – "The Morning Becomes Eclectic Session"
2015 Holychild – "The Shape of Brat Pop to Come"
2016 Islands – "Taste"
2017 N.E.R.D. – "No_One Ever Really Dies"
2017 Bladee, Ecco2k, Thaiboy Digital – "D&G"
2021 Gyða Valtýsdóttir – "Ox"


1974 Capability Brown - Voice
1980 Jack McDuff - Kisses
1988 Pretty Boy Floyd - Bullets & Lipstik
1997 The Flashcubes - Bright Lights
2002 Cherry Lips - Breaking Up is Hard to Do
2003 Gob - Foot in Mouth Disease
2004 Kelli Ali - Hot Lips
2008 Nico Muhly - Mothertongue
2012 Sare Havlicek - Bipolar Duality EP
2013 Dank - Blow Me (The Remixes)
2013 Spain - The Morning Becomes Eclectic Session
2015 Holychild - The Shape of Brat Pop to Come
2016 Islands - Taste
2017 Bladee, Ecco2k, Thaiboy Digital - D&G
2017 N.E.R.D. - No_One Ever Really Dies
2021 Gyða Valtýsdóttir - Ox

Sunday, January 16, 2022

A Walk Out the Backside of Bergamo to Osteria Scotti

Left: Val d'Astino from Via San Sebastiano, Bergamo. Right: Via Agliardi in Paladina, Bergamo.

Length: 14 km (8.7 miles)
Duration: 3 hours (excluding lunch)
Elevation: 500 m (1,640 ft)
Location: Italy, Lombardy, Bergamo - Paladina

As we have said many times in this blog, we walk to clear our minds. And we walk to eat. On this beautiful day, both objectives were satisfied.

What we call the "backside of Bergamo" are the hills that extend out northwest from the upper city. It's a ridge that's mostly wooded and crisscrossed with walking trails. When most people enter Bergamo, be it from the highway, train station, or airport, they see Bergamo and its upper city as viewed from the south. In that sense, the backside, as we call it, is quieter and less frequented by tourists. Of course, someone's backside could be someone else's frontside. It's all a matter of perspective.

Today's eating objective is the elegant Osteria Scotti, located in Paladina, northwest of Bergamo. Walking, it took us about 1.5 hours to reach it. There are many ways to get there. From the tip top of Bergamo at San Vigilio, we took San Sebastiano to Via Fontana, to Via Madonna della Castagna. From the Santuario della Beata Vergine della Castagna, you can walk on a trail or go straight to the osteria on Via Sombreno.

Roundtrip from our location in citta bassa Bergamo, the walk turned out to be about 14 km with elevation gain of about 500 m. (You can reduce the elevation gain by choosing routes that stay low.)

The way back took us about 3 hours. What happened? We ran into friends and stopped to chat, and then made a quick pit stop at Forno Fassi Bakery for provisions. That's the beauty of walking: the unexpected encounters and small delights along the way.

About walking, we recently came across the Out of Eden Walk project sponsored by National Geographic. The project is a decade-long experiment in slow journalism following the journalist Paul Salopek's 24,000-mile odyssey walking the pathways of the first humans who migrated out of Africa all the way to South America.

The project resonated with us in that while we never would (or could) walk this far, we love walking and love experiencing the world at the speed of walking: 3 miles an hour. We love traveling and experiencing the world on foot even more than by train, car, or even bike. We see and experience the world more intimately walking. We always feel better after a good walk. And, in this case, we had a great meal.


Left: A grand entrance on Via Madonna della Castagna.
Center: Via Colle dei Roccoli, Bergamo.
Right: Via Fontana, Bergamo.


Left: A roadside altar on Madonna della Castagna.
Center: Façade of the Parrocchia di Sombreno.
Right: House pattern on Via San Sebastiano, Bergamo.


Left: Via San Sebastiano, Bergamo.
Center: Espaliered beech trees - Osteria Scotti entrance.
Right: The sculpted hills of Bergamo along Via San Sebastiano with Monte Rosa in the distance.

Left: Trail extending from Via Agliardi in Paladina.
Center: A donkey on Via Sotto Mura di Sant'Alessandro.
Right: Dusk from Viale delle Mura, Bergamo. The end of our day and today's walk.

Left: Osteria Scotti - Amuse-bouche with chestnut cream.
Center: Osteria Scotti - Soffice di cavolfiore, uovo trasparente, salsa alle alici e crostini.
Right: Osteria Scotti - Zuppa di cipolle in crosta.

Left: Osteria Scotti - Sella di maialino cotta a bassa temperatura, riduzione al mirto e pure di finocchio.
Right: Osteria Scotti - Tataki di ricciola, insalata di salicornia, maionese alla nocciola. Ricciola is Greater Amberjack.

Left: Osteria Scotti - morbido alla farina di cocco e cioccolato fondente.
Right: Osteria Scotti - zabajone e gelato alla cannella.

Thursday, January 13, 2022

A Natural Language Query Parser for an Information Management System Called Scrapbook



Overview


Scrapbook Platform - NL Query Engine Diagram
An architectural diagram showing NL processing in an information management system.


This is a diagram of our NL Query Engine service. It illustrates how we process natural language queries (questions posed in plain English) that are passed into the QueryEngine API from our Scrapbook web application or bot service applications (MS Teams, Alexa). The QueryEngine service generates the appropriate actions and SQL queries to return the requested results from the user's Scrapbook collection.

The diagram reads from left to right. At left, our API accepts an NL query object which includes the user’s request as natural language text (NL Query), an application key and other parameters. The NL query text is routed to the LUIS cognitive service app for evaluation against our ML trained language model to extract the relevant ‘intents’ and ‘entities’ that we’ve prepared our model to recognize and which we use to identify an action and to construct an appropriate SQL database query.

Depending on the inferred 'intent', NL Query processing is routed via a specific processing pipeline (labeled above as Intent Processors). Each NL query follows a similar process flow: starting from the Request Router, then ML language analysis, routing, request parsing, query generation, query execution, language generation (summarizing the query as interpreted and the results found), and finally back to the Request Router, which assembles and returns a response. Query State allows for contextual or 'follow-on' queries.

Various Azure cloud services including LUIS and Cosmos DB are utilized in the processing pipeline (as seen in the upper part of the diagram.)
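The flow just described can be sketched in a few lines of Python. To be clear, this is an illustrative sketch only: the names (`route`, `INTENT_PROCESSORS`, and so on) are ours for this post, not the actual Scrapbook implementation, and the LUIS call is stubbed out.

```python
# A stub for the LUIS call: in the real service this is a request to the
# published Scrapbook LUIS app, which returns an intent and entities.
def analyze(nl_query: str) -> dict:
    return {"intent": "find", "entities": {"category": "book"}}

# Registry mapping an inferred intent to its processing pipeline.
INTENT_PROCESSORS = {}

def intent_processor(name):
    """Decorator that registers a processor for a given intent."""
    def register(fn):
        INTENT_PROCESSORS[name] = fn
        return fn
    return register

@intent_processor("find")
def process_find(luis_result: dict, state: dict) -> dict:
    # Parse request -> generate SQL -> execute -> summarize (all elided here).
    return {"intent": "find", "summary": "Here are the items found."}

def route(nl_query: str, state: dict) -> dict:
    """Request Router: analyze, dispatch to an intent processor, save state."""
    luis_result = analyze(nl_query)
    processor = INTENT_PROCESSORS.get(luis_result["intent"])
    if processor is None:
        return {"summary": "Sorry, I didn't understand that request."}
    response = processor(luis_result, state)
    state["last_response"] = response  # kept as context for follow-on queries
    return response
```

The registry-of-processors shape is what lets new intents be added without touching the router itself.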



What are we writing about and why are we writing about it?

This post is about our work on a Natural Language (NL) parser for an Information Management Platform we developed called Scrapbook. We've covered Scrapbook in several previous posts (2017 introduction, 2019 our memex, 2021 user scenarios) and we continue in this post with a focus on how we deal with NL queries. By “NL query”, we mean a question or request posed in ordinary plain language as you would to another person. Users interact with the Scrapbook platform through any of various bot channels including MS Teams and Alexa, or via our Scrapbook web application. For example: “Tell me about walks we did in Greece last summer with Mary.”


How did we end up using NL in Scrapbook?

There were two major impetuses that drove our implementation of NL, the first of which emerged organically in our implementation of a bot service as a means to access the Scrapbook platform via chat experiences such as Alexa, MS Teams, Skype, and Messenger. These experiences all intrinsically imply some level of NL interaction, the most typical implementation of which is an interrogative or a “waterfall” style dialog. An example:

"What can I help you with?" => "I'm looking for books."
"In what date range?" => "In 2015."
"Title word?" => "science"

We felt this would be cumbersome as the Scrapbook data model allows for many dimensions, and the data itself spans a broad range of possible information domains. Forcing the user through a chain of questions to probe these dimensions is unwieldy and at best, tedious for the user. We sought a more conversational and fluid ‘natural language’ user experience: "Show me books I read in 2015 with science in the title".

In order to achieve this, we architected a query processing ‘engine’ that allows us to handle free-form user queries against the depth and breadth of Scrapbook’s collections and data models. We’ll explain later how this works.

The second major phase of NL query processing development was driven by our realization that the work we were doing with bots could be more broadly applied to Scrapbook searching in general, regardless of the app, and in particular, from our web application. This created an opportunity to completely separate the NL processing logic from the bot code within which it was originally developed, and to generalize it as a web service that can be called from any Scrapbook user experience whether via a bot channel, web app, mobile, or other.

We understood that as our data model and data itself became richer and more complex, it would be increasingly challenging and expensive to build and maintain forms-based query interfaces within the application. A further challenge was ensuring that the search experience remained intuitive and efficient to use. Forms-based query interfaces may be implemented as single or multiple search boxes with dropdowns and other standard controls to refine the search. These controls remain available in our Scrapbook web application. However, we found that the NL processing and query generation capability that we were implementing in the bot service had already begun to exceed that of our forms-based queries in the app, and was at once more powerful, faster and easier to use.

So, we added an NL query option in the Scrapbook web application that called our newly generalized NL Query Engine, now exposed as a web service. Natural Language querying became almost immediately the go-to search experience in Scrapbook and has continued to evolve in both capability and robustness.



The NL Query Engine, how does it work?

Referring back to the diagram above, the first step in the NL Query Engine processing chain is ML (Machine Learning) Language Analysis, which is called from the Request Router and leverages Microsoft’s LUIS (Language Understanding Intelligence Service) in Azure.

The LUIS service accepts a training model, which is essentially a structured document containing a large number of labeled utterances (sample NL phrases) that we anticipate receiving and that we want our application to recognize and handle intelligently. To enhance ML training and to boost recognition performance, we associate a combination of machine learned, built-in, static and dynamic features, as well as sentence patterns. Once the ML training has been completed and verified, a Scrapbook ML ‘app’ is published on LUIS as a service that we call from our NL Query Engine to analyze Scrapbook user input. We perform this training and publishing process iteratively, both to introduce new functionality as well as to improve recognition performance. The Scrapbook LUIS ML app returns a JSON object which contains the intent and a set of entity features inferred from the input phrase. Depending on the returned LUIS intent and query context, the Request Router directs program flow to the appropriate ‘Intent Processor.’

The business logic within each Intent Processor may differ, but the query processing flow is always the same, passing next to the Request Parser, then to the Query Generator, Query Execution, and finally back to the Request Router.

The Request Parser accepts a LUIS object containing the intent and entities identified in the user’s NL query, a Collection Definition object enumerating the specific categories, subcategories and synonyms associated with the Scrapbook collection being queried, and optionally, a prior Request Object that we use to facilitate contextual parsing. The Request Parser extracts and builds what we call a Request Object. The Request Object holds all of the query terms and parameters that are needed to generate the actual database query. These include category, subcategory, date-time, location, field, text string, negation, sorting, and various selection parameters.
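As a sketch, a Request Object might look something like the following Python dataclass. The field names and types here are our assumptions for illustration; the actual Scrapbook definitions are not shown in this post.

```python
from dataclasses import dataclass, field
from typing import Optional

# An illustrative shape for the Request Object described above.
@dataclass
class RequestObject:
    category: Optional[str] = None       # e.g. "drink"
    subcategory: Optional[str] = None    # e.g. "wine"
    date_range: Optional[tuple] = None   # (start, end) datetime pair
    geolocation: Optional[dict] = None   # named region or polygon
    field_name: Optional[str] = None     # e.g. "rating"
    text: Optional[str] = None           # free-text term, e.g. "at least 3"
    negated: bool = False                # "except ..." style queries
    sort: str = "itemDate DESC"          # default ordering in our examples
    select: list = field(default_factory=list)
```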

The query, “Show me wines from France in 2018 with a rating of at least 3”, for example, is routed after ML Language Analysis to the ‘Find Intent’ processor, from which the Request Parser extracts the following elements and parameters:
  • Category: ‘drink’
  • Subcategory: ‘wine’
  • Geolocation: ‘({location: france}, {type: countryRegion})’
  • DateRange: ‘({1/1/2018 12:00:00 AM}, {1/1/2019 12:00:00 AM})’
  • QueryTerm: ‘with’
  • Field: ‘rating’
  • Text: ‘at least 3’

The next step in the pipeline is the Query Generator, which transforms the parsed query elements and parameters from the Request Object into a set of SQL language query fragments held in a Query Object. Continuing with our example:
  • Category: AND ( STRINGEQUALS(c.category, "Drink", true) )
  • Subcategory: AND ( (RegexMatch(c.bodyObj["type"], "^(|.*,)(|\\s)wine(,.*|)$", "i")) )
  • GeoLocation: AND ST_WITHIN(c.geoLocation, {"type":"MultiPolygon","coordinates":[[[[…]]]] } )
  • DateRange: AND ( c.itemDate >= "2018-01-01T00:00:00" AND c.itemDate < "2019-01-01T00:00:00" )
  • SearchString: AND StringToNumber(c.bodyObj["rating"]) >= 3

We assemble the SQL fragments into one or more SQL query requests that are run against the appropriate Scrapbook collection hosted in an Azure Cosmos database. Other intent processing pipelines may perform other actions.
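The assembly step itself is simple string composition. Here is a minimal sketch that joins fragments like the ones listed above into the SELECT / WHERE / ORDER BY shape used in the examples later in this post (the function name and its defaults are ours for illustration, not the actual Scrapbook code):

```python
# Join Query Object SQL fragments into a single Cosmos DB SQL query string.
def assemble_query(fragments: list,
                   select: str = "SELECT c.id, c.itemDate FROM c",
                   order_by: str = "ORDER BY c.itemDate DESC") -> str:
    """Compose SELECT, the base WHERE clause, fragments, and ordering."""
    where = 'WHERE c.type = "scrapbookItem"'
    return " ".join([select, where] + fragments + [order_by])

# Fragments follow the wine example above.
fragments = [
    'AND ( STRINGEQUALS(c.category, "Drink", true) )',
    'AND ( c.itemDate >= "2018-01-01T00:00:00" AND c.itemDate < "2019-01-01T00:00:00" )',
    'AND StringToNumber(c.bodyObj["rating"]) >= 3',
]
sql = assemble_query(fragments)
```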

We generate natural language fragments along the way that ‘restate’ the query as it was interpreted by our processing, and once the query has been executed, we summarize the results returned from the database whether successful or not. “Here are the 6 items found in category Drink of type Wine within France between 2018 and 2019 with a rating >= 3.”

The Intent Processor returns control to the Request Router along with the new Request and Response objects.

Finally, the Request Router saves the Request and Response objects to state memory as context for possible follow-on queries, and returns the results to the calling application as an NL Response object.
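A follow-on query such as “What about red wines?” can then be handled by merging the newly recognized entities over the saved request. A minimal sketch, with illustrative field names:

```python
# Merge newly recognized entities over the request saved in state memory.
def merge_follow_on(prior_request: dict, new_entities: dict) -> dict:
    merged = dict(prior_request)  # copy so the saved state is not mutated
    merged.update({k: v for k, v in new_entities.items() if v is not None})
    return merged

# Prior request: "Show me wines rated greater than 3 from France"
prior = {"category": "drink", "subcategory": "wine",
         "rating_min": 3, "location": "france"}

# Follow-on "What about red wines?" narrows the subcategory, keeping the rest.
follow_on = merge_follow_on(prior, {"subcategory": "red"})
```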



So to recap, how do we search for information in Scrapbook?

Users have two options:

  • Natural Language requests (default) – submitted from our Scrapbook web application, a web bot, or other channels including Alexa and MS Teams – are translated as described above by our NL Query Engine into SQL queries, which are run against the active Scrapbook collection stored in Cosmos DB. Search results are presented according to the application or channel. For an Alexa Spot device for example, the results are spoken. Requests such as to switch collections, change views, select a specific result, explore relationships or to ask for help are also supported via natural language.
  • Form-based searches – as is common in many web or console applications – are translated into LINQ queries which are run against the active Scrapbook collection stored in Cosmos DB (LINQ is a programming model abstraction for querying data. The LINQ syntax is translated behind the scenes into SQL by the Cosmos DB API.)
Some example user queries and resulting SQL expressions are shown below.

Background

This section describes some of the terms and components we use in the Scrapbook Query Engine service.


What is natural language processing (NLP)?
  • NLP is how computers interpret human language: the processing of natural language into a meaningful action or result.
  • For example, in Scrapbook we can ask "Show me hikes we did in 2016 near Cortina d'Ampezzo". NLP is the interpretation and processing of this sentence to return a set of results that satisfy the user’s question.
  • Equally important, Scrapbook generates a natural language response describing what it did and what it found, in language easily understood by the user whether displayed or spoken.

What is Language Understanding (LUIS)?
  • Azure LUIS is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational, natural language text (called an ‘utterance’) to predict and score an overall intent, and to extract and label relevant, detailed information within the text.
  • An utterance is textual or spoken input from the user, the user’s question or query.
    • In Scrapbook for example, an utterance might be "Show me album covers from the 1980s with the keyword ‘hair’."
  • We create an ‘application’ in LUIS by defining a model. Within the model we identify the features we want to recognize. There are two principal feature categories - intent and entity.
  • An intent may be a task or action that the user wants to perform.
    • For example, in Scrapbook, our intents are "find", "drilldown", "map", "count", “related”, “collections”, “sort”, "help", "debug", "select", among others.
    • In the query "What are lunches we’ve had nearby that we’ve rated at least 4?", the intent is "find".
    • Our Scrapbook ML model currently distinguishes 18 intents.
  • Entities are specific features that we train our LUIS application to recognize within the utterance, akin to the parts of a sentence.
    • In the query "Show me wines we had from Italy last year with the variety primitivo", the entities we recognize are "datetime (last year)", "geography (Italy)", "category (drink)", "subcategory (wine)", "field (variety)", and "text (primitivo)".
    • Our ML model entities include category, subcategory, text, number, dimension, ordinal, parameter, query type (with, by, …), nearby, location, geolocation, datetime, and query object (when not a category).
    • We currently recognize 19 entities and roles.
  • Our Scrapbook ML model in LUIS uses a combination of built-in entities such as DateTime, machine-learned entities such as location or text, fixed lists, and dynamic lists. Category and SubCategory are examples of dynamic-list entities that we pass into the model via the API with the utterance. We do this because each Scrapbook collection has its own category definitions. This strategy allows us to achieve excellent category entity recognition across multiple collections. For examples of different collections, see the post 2021 user scenarios.
  • We refine our LUIS ML model by adding or modifying training utterances, patterns and lists in a language understanding (.lu) format file which once uploaded to our LUIS Conversation App in Azure, is used to train a new instance of our model. Once the updated model passes acceptance testing, we publish it into production.
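To give a flavor of the .lu format, here is a tiny, made-up excerpt. This is not our actual Scrapbook model file; the intent names, utterances, and entity declarations are simplified for illustration.

```lu
> Illustrative excerpt only - not the actual Scrapbook model file.

# find
- show me books I read last year
- show me wines from {@location=france} rated at least 3

# count
- how many books did I read this year

@ ml location
@ prebuilt datetimeV2
@ prebuilt number
```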

What is Azure Cosmos DB?
  • Cosmos DB is a managed NoSQL database service that stores documents in collections that can be queried using standard SQL query language syntax. Each "item" in a Scrapbook collection is stored as a Cosmos DB document.
  • Cosmos DB is a schema-free database, which means we structure or model the data as we need for the domain of the collection it represents (see the data model).
  • Structured Query Language (SQL) queries are how we query our collections in Cosmos DB.

What is the Scrapbook data model?
  • Platform / Collection / Item
    • Scrapbook collections are maintained in distinct Cosmos DB database collections.
    • A Scrapbook item is stored as a Cosmos DB document within a collection.
    • The Scrapbook platform supports one or more collections.
    • A Scrapbook collection has one or more items. For reference, our collections have thousands of items.
  • Item / Category / Subcategory (or type) & Fields
    • Scrapbook items have mandatory id, datetime, and category fields. All other fields are optional depending on the category definition. Items are organized principally by category.
    • Each category may have one or more subcategories and synonyms to enhance collection organization, flexibility and queryability. For example, Category ‘activity’ might have ‘run’ as a subcategory, and ‘jogging’ as a synonym for ‘run’.
    • Each category has an associated set of fields that further define it. The category ‘book’ for example, would have an author field, which might not be used in other categories.
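To make the model concrete, here is a hypothetical item document, shaped to be consistent with the fields used in the SQL queries in this post (c.type, c.category, c.itemDate, c.geoLocation, c.bodyObj). Actual Scrapbook documents will differ; all values below are invented. Note that rating is stored as a string, which is why the queries use StringToNumber, and bodyObj["type"] holds a comma-separated list, which is why the queries match it with RegexMatch.

```json
{
  "id": "a1b2c3d4",
  "type": "scrapbookItem",
  "itemDate": "2018-06-15T00:00:00",
  "category": "Drink",
  "geoLocation": { "type": "Point", "coordinates": [4.8357, 45.764] },
  "bodyObj": {
    "type": "wine, red",
    "variety": "syrah",
    "rating": "4",
    "who": "mary"
  }
}
```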

User Query Examples

In this section, we show an NL query and the final SQL that is run against Cosmos DB. Other examples with visual results are shown in our post 2021 user scenarios. These queries were run against our MyJournal collection, which has categories that support these queries. Geolocation-related polygons are shortened with an ellipsis (…) here.
 

"Show me hikes"

SELECT c.id, c.itemDate FROM c
WHERE c.type = "scrapbookItem"
AND ( STRINGEQUALS(c.category, "Activity", true) )
AND ( RegexMatch(c.bodyObj["type"], "^(|.*,)(|\\s)hike(,.*|)$", "i") )
ORDER BY c.itemDate DESC


 

"Tell me about trips we took last year in Lombardy, Italy"

SELECT c.id, c.itemDate FROM c
WHERE c.type = "scrapbookItem"
AND ( STRINGEQUALS(c.category, "Travel", true) )
AND ( c.itemDate >= "2020-01-01T00:00:00" AND c.itemDate < "2021-01-01T00:00:00" )
AND ST_WITHIN(c.geoLocation, {"type":"Polygon","coordinates":[[[9.2592,44.6784], …, [9.2592,44.6784]]]})
ORDER BY c.itemDate DESC



"Show me wines rated greater than 3 from France"
  • followed by “What about red wines?”
  • followed by “What about from Napa Valley?”

SELECT c.id, c.itemDate FROM c
WHERE c.type = "scrapbookItem"
AND ( STRINGEQUALS(c.category, "Drink", true) )
AND ( RegexMatch(c.bodyObj["type"], "^(|.*,)(|\\s)wine(,.*|)$", "i") )
AND StringToNumber(c.bodyObj["rating"]) > 3
AND ST_WITHIN(c.geoLocation, {"type":"Polygon","coordinates":[[[2.65916,42.34262], …, [2.65916,42.34262]]]})
ORDER BY c.itemDate DESC

 

SELECT c.id, c.itemDate FROM c
WHERE c.type = "scrapbookItem"
AND ( STRINGEQUALS(c.category, "Drink", true) )
AND ( RegexMatch(c.bodyObj["type"], "^(|.*,)(|\\s)red(,.*|)$", "i") )
AND StringToNumber(c.bodyObj["rating"]) > 3
AND ST_WITHIN(c.geoLocation, {"type":"Polygon","coordinates":[[[2.65916,42.34262], …, [2.65916,42.34262]]]})
ORDER BY c.itemDate DESC

 

SELECT c.id, c.itemDate FROM c
WHERE c.type = "scrapbookItem"
AND ( STRINGEQUALS(c.category, "Drink", true) )
AND ( RegexMatch(c.bodyObj["type"], "^(|.*,)(|\\s)red(,.*|)$", "i") )
AND StringToNumber(c.bodyObj["rating"]) > 3
AND ST_WITHIN(c.geoLocation, {"type":"Polygon","coordinates":[[[-122.29542286330422,38.26322398466052],[-122.29542286330422,38.25549854951917],[-122.28230540818015,38.25549854951917],[-122.28230540818015,38.26322398466052],[-122.29542286330422,38.26322398466052]]]})
ORDER BY c.itemDate DESC



"Show me items within 100 meters except museums"

SELECT c.id, c.itemDate FROM c
WHERE c.type = "scrapbookItem"
AND NOT ( RegexMatch(c.bodyObj["type"], "^(|.*,)(|\\s)museum(,.*|)$", "i") AND IS_DEFINED(c.bodyObj["type"]) )
AND ( ST_DISTANCE(c.geoLocation, {'type': 'Point', 'coordinates':[current lon, lat]}) < 100 )
ORDER BY c.itemDate DESC



"How many books did I read this year"
  • followed by "Show me a list"

SELECT VALUE COUNT(1) FROM c
WHERE c.type = "scrapbookItem"
AND ( STRINGEQUALS(c.category, "Book", true) )
AND ( c.itemDate >= "2022-01-01T00:00:00" AND c.itemDate < "2023-01-01T00:00:00" )

 

SELECT c.id, c.itemDate FROM c
WHERE c.type = "scrapbookItem"
AND ( STRINGEQUALS(c.category, "Book", true) )
AND ( c.itemDate >= "2022-01-01T00:00:00" AND c.itemDate < "2023-01-01T00:00:00" )
ORDER BY c.itemDate ASC



"Show me hikes last year except with Roberto"
  • followed by "Show me a map"

SELECT c.id, c.itemDate FROM c
WHERE c.type = "scrapbookItem"
AND ( STRINGEQUALS(c.category, "Activity", true) )
AND ( RegexMatch(c.bodyObj["type"], "^(|.*,)(|\\s)hike(,.*|)$", "i") )
AND ( c.itemDate >= "2021-01-01T00:00:00" AND c.itemDate < "2022-01-01T00:00:00" )
AND NOT ( CONTAINS(c.bodyObj["who"], "roberto", true) AND IS_DEFINED(c.bodyObj["who"]) )
ORDER BY c.itemDate DESC


“Show me a map” results in a ‘map’ intent which we interpret as a command to display a map of results.


"Find me books of type reference"

Here's the SQL from the NL query:

SELECT c.id, c.itemDate FROM c
WHERE c.type = "scrapbookItem"
AND ( STRINGEQUALS(c.category, "Book", true) )
AND ( RegexMatch(c.bodyObj["type"], "^(|.*,)(|\\s)reference(,.*|)$", "i") )
ORDER BY c.itemDate DESC


In the Scrapbook web application, we can also search via form-based web controls which generate a LINQ query that in turn translates into the following similar SQL:

SELECT VALUE root FROM root
WHERE (((true AND CONTAINS(LOWER(root["bodyObj"]["type"]), "reference"))
AND (LOWER(root["category"]) = "book"))
AND (root["type"] = "scrapbookItem"))
ORDER BY root["itemDate"] DESC