Complex Searching

Searching for data in our systems can be hard. If you’ve ever needed to search a large amount of text generically, you know it isn’t easy to write code that does this efficiently. It’s even harder if you try to embed that capability in an application. If you get into full-text searching, you would likely look beyond SQL Server, as the built-in full-text indexing and searching isn’t great.
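To see why generic text search is hard to do efficiently by hand, here is a minimal sketch of an inverted index, the core idea behind full-text engines. The sample documents and the simple word-level tokenization are assumptions for illustration; real engines add stemming, ranking, and phrase handling.

```python
from collections import defaultdict

# Minimal inverted index: map each word to the set of documents that
# contain it, so a lookup avoids scanning every document's full text.
docs = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "quick thinking saves the dog",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(word):
    # Return the sorted document ids containing the word, if any.
    return sorted(index.get(word.lower(), set()))

print(search("quick"))  # [1, 3]
print(search("dog"))    # [2, 3]
```

A naive scan would touch every document on every query; the index does the work once, up front, which is essentially what a full-text engine builds for you.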

What if you had to search more complex data? Imagine that your users were looking for words in audio files, or they wanted to find images that matched other images. Those are even more complex searches, and I expect that few people have had to deal with them. I know some who have, and I do expect this to be something customers will demand over time.

I saw an article about Google Lens coming to the desktop, allowing you to clip an image and then search for similar items. While I haven’t used this for searching, my kids have. Often they are looking for matches of a product or a place, and they like being able to search for related images. Image search is interesting, as you are looking to match patterns of numbers, but not exact matches. Fuzzy matching parts of an image, which is a series of numeric values, without requiring all the values to match has to be a very challenging process. I would find this even more complex than doing a speech-to-text translation from audio and then searching for words; many of us would know how to produce a solution for searching audio recordings.
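To make the idea of fuzzy numeric matching a little more concrete, here is a minimal sketch of scoring similarity between image feature vectors with cosine similarity. The three-number vectors are made up for illustration; a real system would derive thousands of values per image from a trained model.

```python
import math

def cosine_similarity(a, b):
    # Measure how closely two numeric feature vectors point in the
    # same direction: 1.0 means identical direction, near 0 unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical feature vectors for a query image and two candidates.
query   = [0.9, 0.1, 0.4]
photo_a = [0.8, 0.2, 0.5]   # similar scene
photo_b = [0.1, 0.9, 0.0]   # very different scene

print(cosine_similarity(query, photo_a))  # close to 1.0
print(cosine_similarity(query, photo_b))  # much lower
```

Notice that no value matches exactly in either pair; the search is about which candidate is *closest*, not which one is equal, which is what makes image search a ranking problem rather than a lookup.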

The theory of how to search for images based on another image is a complex field of study. I don’t pretend to know how this might work, and I’m not sure I care. If I had a need for this, I wouldn’t expect to understand how the system works, as it would really be an API call of some sort. I might need to work with the API and learn how to use it better to produce the results my clients need, but that might be the most I’d care about.

As with a lot of deeply technical things, I care more about practical applications. On a recent trip, I watched my wife use Google Lens to translate some menus, something I hadn’t considered. I usually would type in text and look for meanings, but using the image is much easier. I expect we’ll see the need for more complex searches in the future, but for most of us, it will be another service we consume without really understanding how it works, which is probably fine in these situations.

Steve Jones

Listen to the podcast at Libsyn, Stitcher, Spotify, or iTunes.

Editor, SQLServerCentral