The Deep SEARCH 9 Integrated Development Environment enables data scientists and computational linguists to build Managed Intelligence Applications.
Managed Intelligence means connecting users with the information and insights they need to perform optimally in their business context.
The IDE is part of the DS9 Developer's Edition, which enables your organization to develop and deploy DS9 solutions that add high value to business imperatives such as:
improving customer engagement and lifetime value,
creating successful marketing strategies,
developing safer, more innovative products,
and managing field operations, to name just a few.
The graphical, browser-based development environment enables team-oriented, rapid development of Managed Intelligence Applications for end users.
These applications can deliver information, analytics and insights to users who might otherwise struggle to get the information they need.
The Integrated Development Environment was built to draft Managed Intelligence applications quickly and refine them into a full 360° view of that intelligence.
Because the IDE is browser based, it supports team work: multiple developers can work on the same project at the same time, contributing to its completeness and performance.
For example, a SEARCHCORPUS® could cross-reference university research activities with relevant technologies to provide a 360° view of research projects or to find scientists with specific skill sets.
It could also relate the side effects of a medication to the drug's active ingredients and,
by resolving the underlying chemical compounds, give a 360° view of the potential side effects of those compounds when they are used in drugs.
In the world of competitive intelligence or Mergers & Acquisitions, a Deep SEARCH 9 application may collect and classify information about competitors
and M&A targets from available sources like patent databases, financial statements, public registers, corporate websites and news feeds to build a 360°
view of companies that require a close watch.
Graphical Browser Based IDE
The ease of use of the graphical development environment and the agile way in which information scientists, computational linguists and other developers can
collaborate when building Deep SEARCH 9 applications allow for extremely short turnaround times for newly developed applications and for new releases of
extended or modified applications.
Filter Chain Editor
The filter chain editor is the central tool with which developers draw filter chains using drag-and-drop operations, much like in other drawing tools.
When developers arrange filters in a filter chain and execute that chain in a job,
the filters dynamically add information to the application's SEARCHCORPUS® index, which combines information from different
sources into a 360° view of that information, building intelligence.
Managing Intelligence: Building a DS9 Solution
Data scientists, computational linguists and line-of-business research engineers
decide which explorative or analytic mechanisms constitute the DS9 solution for a specific area.
Following best practice patterns and customizing templates from the Filter Template Gallery, developers build problem specific filter chains using the graphical filter chain editor.
The basic building blocks of a DS9 solution are projects, filter chains, filters,
containers and jobs.
Filters are responsible for the execution of specific explorative or analytic tasks and containers define how processed data is stored or indexed.
Filters and Containers are connected by Connections to build a filter chain.
The simplest filter chain consists of an input container, a filter and an output container that are connected.
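The building blocks described above can be sketched as a minimal data model. This is an illustrative sketch only, assuming simple Python classes; the names and structure are not the actual DS9 API.

```python
# Hypothetical sketch of the DS9 building blocks: containers hold records,
# filters perform one task each, and a chain connects them in sequence.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Container:
    """Holds records; plays the role of an input or output container."""
    name: str
    records: list = field(default_factory=list)

@dataclass
class Filter:
    """Executes one explorative or analytic task per record."""
    name: str
    transform: Callable

@dataclass
class FilterChain:
    """Connects source -> filters -> sink, as in the simplest chain."""
    source: Container
    filters: list
    sink: Container

    def run(self):
        data = list(self.source.records)
        for f in self.filters:              # connections define the order
            data = [f.transform(r) for r in data]
        self.sink.records.extend(data)

# The simplest possible chain: one filter between two connected containers.
inp = Container("input", records=["alpha", "beta"])
out = Container("output")
chain = FilterChain(inp, [Filter("upper", str.upper)], out)
chain.run()
print(out.records)  # -> ['ALPHA', 'BETA']
```

The sketch keeps the input container untouched and appends the filter's output to the output container, mirroring the source-to-sink flow described above.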
Impressions and Screen Shots of the IDE
Containers and filters are defined in dialogs that pop up when a filter or container symbol is dragged onto the drawing pane
in the graphical filter chain editor.
Containers can connect to one of three entirely different backends:
JDBC: plain storage; the fastest backend
Elasticsearch: allows complex full-text search in the container records and the attached SEARCHCORPUS®
RDF: data is stored in a triple store; this is the semantic web backend used for Ontology Management
There are a number of different Container Types with predefined data models that may be tied to a specific backend. Containers of type Record Set
are the most flexible containers, as they can have an arbitrary number of data fields.
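The flexibility of Record Set containers can be pictured as records that are free-form field mappings. A small sketch, with invented field names purely for illustration:

```python
# Sketch of a Record Set-style container: each record is a free-form
# mapping, so the number and names of data fields may vary per record.
record_set = [
    {"title": "Patent A", "assignee": "ACME", "year": 2021},
    {"title": "Paper B", "authors": ["Doe"], "doi": "10.1000/xyz"},  # different fields
]

# A downstream filter can still process such records uniformly
# as long as it only relies on the fields it needs:
titles = [r["title"] for r in record_set]
print(titles)  # -> ['Patent A', 'Paper B']
```

This is why Record Sets work well when combining heterogeneous sources such as patents, papers and news items into one container.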
The graphical design of filter chains is so simple that researchers can use the DS9 system without programming
or other specialist skills.
There are more than 60 different filter types that can be used to build filter chains.
For each of these filter types there is a working example in the Filter Type Gallery available with the DS9 Developer's Edition, and the
Filter Dialog explains the usage of each filter type.
Simple drag and drop operations configure filters, containers and connections.
Connections define the sequence in which filters process data and propagate it through the filter chain.
Each DS9 solution is composed of one or more filter chains that build on each other and are
executed in sequence to explore documents, analyze content or data and have it processed by each
of the configured filters to generate the information with the content and in the format that is required.
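The sequential execution described above, where chains build on each other and each filter propagates its output onward, can be sketched in a few lines. The function and variable names are assumptions for illustration, not DS9 internals:

```python
# Sketch: a solution as filter chains executed in sequence.
# The output of one chain feeds the next; within a chain,
# the connection order determines which filter runs when.
def run_solution(chains, documents):
    data = documents
    for chain in chains:          # chains build on each other
        for filt in chain:        # connections define the filter order
            data = [filt(d) for d in data]
    return data

# Two tiny chains: one normalizes text, the next tokenizes it.
normalize_chain = [str.strip]
tokenize_chain = [str.split]
result = run_solution([normalize_chain, tokenize_chain], ["  hello world  "])
print(result)  # -> [['hello', 'world']]
```

Each "filter" here is just a function, but the shape of the loop matches the pipeline idea: data flows through every configured filter in order.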
To enter data manually, such as seed URLs for a URL Crawler, the Container Manager is opened by clicking on the container in the tree.
Here, data can either be added manually or imported by dragging a file with a compatible data structure onto the dialog.
Each filter chain is assigned one or more jobs that are monitored by the Parallel Processing Engine and executed automatically
at the scheduled time. The engine monitors running tasks, resource consumption and resource access to manage the parallel execution of filters and
to avoid resource overloading and deadlocks.
The Parallel Processing Engine ensures that all scheduled jobs are executed and that resources shared by filter chains are made available as
they are needed. Multiple jobs can run in parallel.
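One standard way to run jobs in parallel while avoiding deadlocks on shared resources is to acquire all resource locks in a fixed global order. This sketch illustrates that general technique; it is not a description of how the Parallel Processing Engine is actually implemented:

```python
# Illustrative sketch: concurrent jobs that share resources.
# Acquiring locks in a fixed (sorted) order prevents the circular
# wait that causes deadlocks.
import threading

resource_locks = {name: threading.Lock() for name in ("corpus", "index")}
results = []

def run_job(job_name, resources, work):
    # Always lock in sorted order so two jobs can never wait on
    # each other's held locks in a cycle.
    for r in sorted(resources):
        resource_locks[r].acquire()
    try:
        results.append((job_name, work()))
    finally:
        for r in sorted(resources, reverse=True):
            resource_locks[r].release()

jobs = [
    threading.Thread(target=run_job, args=("crawl", ["corpus"], lambda: "done")),
    threading.Thread(target=run_job, args=("index", ["corpus", "index"], lambda: "done")),
]
for t in jobs:
    t.start()
for t in jobs:
    t.join()
print(sorted(results))  # both jobs complete; shared "corpus" access was serialized
```

A real scheduler would add time-based triggers and resource-consumption monitoring on top of this basic locking discipline.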
Job Instance Dialog
For each job that was executed, a Job Instance is created that preserves all data the job accessed at the time of execution.
The filter chain that was processed and the data in all containers are copied to the job instance.
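The snapshot idea behind a Job Instance can be sketched as a deep copy taken at execution time, so later changes to the live containers do not affect the preserved run. The function and key names here are assumptions for illustration:

```python
# Sketch of the job-instance idea: copy the chain and all container
# data at execution time so the run is preserved exactly as it was.
import copy
import time

def create_job_instance(chain, containers):
    return {
        "executed_at": time.time(),
        "chain": copy.deepcopy(chain),           # the chain as processed
        "containers": copy.deepcopy(containers), # all data at run time
        "log": [],
    }

chain = ["crawl", "classify"]
containers = {"seeds": ["https://example.com"]}
instance = create_job_instance(chain, containers)

containers["seeds"].append("late change")   # does not affect the snapshot
print(instance["containers"]["seeds"])      # -> ['https://example.com']
```

The deep copy is the essential point: a shallow copy would still share the underlying record lists with the live containers.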
A log file of the job execution is saved with the job instance; it records each step, how much data
was processed, and even which remote servers were unavailable. During development of the filter chains this is a great help for developers,
especially when problems occur. Developers can then switch the job log file to debug or trace mode to
get more detailed information as the job is being executed.
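Switching a log between normal, debug and trace verbosity follows a common logging pattern. A minimal sketch using Python's standard logging module (which has no built-in TRACE level, so one is registered below DEBUG here); the logger name is an assumption, not a DS9 identifier:

```python
# Sketch of log-level switching for a job log: normal operation logs
# little, while debug/trace modes emit step-by-step detail.
import logging

TRACE = 5                      # custom level below DEBUG (10)
logging.addLevelName(TRACE, "TRACE")

log = logging.getLogger("ds9.job")  # hypothetical logger name
logging.basicConfig(format="%(levelname)s %(name)s: %(message)s")

log.setLevel(logging.INFO)     # normal operation
log.info("job started")
log.debug("record details")    # suppressed at INFO level

log.setLevel(TRACE)            # switched on to diagnose a problem
log.debug("record details")    # now emitted
log.log(TRACE, "raw filter input and output")
```

Raising verbosity only while diagnosing a problem keeps the routine job logs small while still allowing step-level detail on demand.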