Stages of Open Source Intelligence
To provide useful intelligence to its consumers, open source intelligence follows a structured, repeatable approach. The CIA's "Intelligence Cycle" and "Intelligence Studies" describe this process in slightly different ways, but both share the common steps of collection, processing, analysis, production, and dissemination; the former adds planning and direction as additional steps, while the latter adds classification.
1. Data collection
Data is a critical asset in conducting any intelligence activity. Like any other intelligence discipline, open source intelligence relies heavily on data, which is extracted from publicly available sources. At this stage, the data sources and the types of data to be collected must be identified. Sources can be queried with keywords, and the data may take the form of text, images, video, and so on. Web intelligence monitoring systems such as the Knowlesys Intelligence System can collect information from specified data sources (such as social media, search engines, online databases, and the dark web) based on keywords, as in the sketch below.
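As a rough illustration, the sketch below shows what keyword-driven collection might look like in Python. The endpoint URL, its query parameters, and the field names in the response are hypothetical placeholders (not the Knowlesys API), and it assumes the `requests` library is available.

```python
# Minimal sketch of keyword-driven collection from a public source.
# The endpoint and response fields are hypothetical placeholders.
import json
import requests

SEARCH_ENDPOINT = "https://example.com/api/search"  # hypothetical public search API


def collect(keyword, limit=50):
    """Fetch raw items mentioning `keyword` and keep only the fields we need."""
    response = requests.get(
        SEARCH_ENDPOINT,
        params={"q": keyword, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    items = response.json().get("results", [])
    # Record the keyword and source alongside each raw item for later processing
    return [
        {
            "keyword": keyword,
            "source": item.get("url"),
            "type": item.get("media_type", "text"),
            "content": item.get("text", ""),
        }
        for item in items
    ]


if __name__ == "__main__":
    raw_data = collect("example company")
    with open("collected.json", "w", encoding="utf-8") as fh:
        json.dump(raw_data, fh, ensure_ascii=False, indent=2)
```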
2. Data processing
The processing step mainly involves validating the raw data received during the collection phase and removing noise from it so that it is ready for analysis. Tasks performed during this stage include filtering out irrelevant data, translating text between languages, and converting photo, audio, and video files into usable data. The sheer volume of data obtained from open sources makes it difficult to interpret and extract useful insights, and typically calls for additional resources such as cloud storage and big-data computing capacity.
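A minimal sketch of what this processing might look like, assuming records shaped like those in the collection sketch above (the field names are illustrative only):

```python
# Minimal sketch of the processing stage: normalising raw collected items,
# dropping irrelevant or empty records, and removing exact duplicates.
import hashlib
import re


def normalise(text):
    """Lower-case, strip markup remnants, and collapse whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)      # drop leftover HTML tags
    text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
    return text.lower()


def process(raw_items, relevant_terms):
    """Filter noise out of raw items and deduplicate by content hash."""
    seen = set()
    cleaned = []
    for item in raw_items:
        content = normalise(item.get("content", ""))
        if not content:
            continue                                   # empty record: noise
        if not any(term in content for term in relevant_terms):
            continue                                   # irrelevant to the tasking
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        if digest in seen:
            continue                                   # exact duplicate
        seen.add(digest)
        cleaned.append({**item, "content": content})
    return cleaned
```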
3. Utilization of data
Also known as the analysis or exploitation phase, this step determines whether the material processed in the previous phase is what it claims to be and assesses its value to the intelligence community. It consists of three steps: authentication, credibility assessment, and contextualization. Verifying the authenticity and credibility of information is critical to developing trusted knowledge. Contextualization involves assembling several pieces of open source information from different sources into an output that provides a comprehensive understanding of a topic. The most common analysis methods are lexical analysis, semantic analysis, geospatial analysis, and social media analysis.
a. Lexical Analysis
Lexical analysis is the process of collecting and analyzing large amounts of text from the Internet. Identifying frequently searched phrases on Google is a direct application of lexical analysis.
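As a simple illustration, lexical analysis can start with nothing more than counting frequent words and two-word phrases in collected text. The sketch below uses only the Python standard library; the stop-word list is a small illustrative sample, not a complete one.

```python
# Minimal sketch of lexical analysis: most frequent words and bigrams
# across a body of collected text.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on"}


def tokenize(text):
    """Split text into lower-case word tokens, ignoring punctuation and stop words."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP_WORDS]


def top_terms(texts, n=20):
    """Return the n most common words and two-word phrases across all texts."""
    words = Counter()
    bigrams = Counter()
    for text in texts:
        tokens = tokenize(text)
        words.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))  # consecutive word pairs
    return words.most_common(n), bigrams.most_common(n)
```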
b. Semantic Analysis
Semantics is the branch of linguistics that studies the meaning of language. In the context of natural language processing, semantic analysis evaluates and interprets human language, analyzing text written in English and other natural languages in a way that approximates human interpretation.
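Full semantic analysis relies on NLP models, but a crude building block can be sketched with plain term-frequency vectors and cosine similarity; this only captures word overlap, not meaning, and is shown purely as a baseline illustration.

```python
# Rough baseline for comparing two texts: cosine similarity between
# bag-of-words term-frequency vectors (1.0 means identical wording).
import math
import re
from collections import Counter


def term_vector(text):
    """Bag-of-words term-frequency vector for a text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine_similarity(a, b):
    """Cosine of the angle between the two term vectors."""
    va, vb = term_vector(a), term_vector(b)
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0


print(cosine_similarity("the plant opened a new factory",
                        "a new factory was opened at the plant"))
```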
c. Geospatial analysis
In environmental studies, geospatial analysis refers to the use of geographic data to discover important information about the environment, referenced both geographically and temporally. Its basic functions include identifying environmental threats, tracking the diffusion of pollutants over time, modeling environmental factors such as ocean temperature and acidification, and correlating different environmental characteristics with locations. Geospatial analysis refers specifically to data transformations that depend on geography. Geographic information systems, remote sensing, GPS, metadata, and georeferencing are some of the techniques used in this type of analysis. Geospatial analysis techniques are widely used in fields such as weather-related risk assessment, urban planning and development, covert operations, and natural resource development. Data for geospatial analysis comes from a variety of sources, such as images uploaded to social media, mobile device data, and detailed GPS and sensor information, which analysts combine to build meaningful intelligence.
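One common geospatial task in open source work is recovering the GPS coordinates embedded in a photo's EXIF metadata. The sketch below assumes the Pillow imaging library is installed and uses a hypothetical file name; many platforms strip this metadata on upload, so results will vary.

```python
# Minimal sketch: pull GPS coordinates out of a photo's EXIF metadata.
from PIL import Image, ExifTags

GPS_IFD_TAG = 0x8825  # standard EXIF tag id for the GPSInfo sub-directory


def dms_to_decimal(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # South and West references are negative in decimal notation
    return -decimal if ref in ("S", "W") else decimal


def extract_gps(path):
    """Return (latitude, longitude) from an image's EXIF data, or None."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(GPS_IFD_TAG)
    if not gps_ifd:
        return None
    # Map numeric GPS tag ids to readable names such as GPSLatitude, GPSLongitudeRef
    gps = {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    try:
        lat = dms_to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"])
        lon = dms_to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    except KeyError:
        return None  # GPS block present but coordinates incomplete
    return lat, lon


if __name__ == "__main__":
    coords = extract_gps("downloaded_photo.jpg")  # hypothetical file
    print("No GPS metadata found" if coords is None else f"Photo taken near {coords}")
```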
d. Social media analysis
The practice of gathering basic information from people's social media platforms and drawing practical conclusions from it is called social media analysis. The information analyzed comes from people's previous posts, conversations with their followers, earlier social media activity, and more. The purpose of social media analysis is to obtain valuable information about individual attitudes and preferences. Most users use social media to express emotions such as happiness, anger, agreement, disagreement, and annoyance through text messages or posts. When individuals mention or discuss a business or product on social media, sentiment analysis methods can be used to determine the mood or emotion behind the phrases they use. A detailed analysis of the terms individuals use to express themselves about a scene, event, product, brand, company, or other subject reveals public opinion on the subject under consideration. With the right tools, organizations can use social media analytics to discover commonalities in consumer preferences and complaints, as well as what is being said about a person, business, or event online.
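As a toy illustration of sentiment analysis, the sketch below scores posts against small positive and negative word lists; real systems use far larger lexicons or trained models, and the example posts are invented.

```python
# Minimal lexicon-based sentiment scoring for short social media posts.
import re

POSITIVE = {"love", "great", "happy", "excellent", "recommend", "agree"}
NEGATIVE = {"hate", "terrible", "angry", "awful", "broken", "disagree"}


def sentiment(post):
    """Return a score in [-1, 1]: negative, neutral (0), or positive."""
    words = re.findall(r"[a-z']+", post.lower())
    hits = [w for w in words if w in POSITIVE or w in NEGATIVE]
    if not hits:
        return 0.0
    score = sum(1 if w in POSITIVE else -1 for w in hits)
    return score / len(hits)


posts = [
    "I love the new model, great battery life",
    "Customer service was terrible, I'm angry",
]
for p in posts:
    print(f"{sentiment(p):+.2f}  {p}")
```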
4. Knowledge production and dissemination
The final stage of open source intelligence is delivering meaningful intelligence reports to consumers. Since a report may be comprehensive and high priority, it can be shared directly with the judiciary, law enforcement agencies, and other interested parties. Classification levels for open source intelligence products are also assigned during the production phase; the details of how data was collected, analyzed, and exploited may require a higher classification level. Dissemination is an important part of the production phase. The most common way to share open source analysis is through formal reports, but a product can also take the form of a verbal briefing or a visual representation. Systems such as the Knowlesys Intelligence System provide data collection and graphing functions that make reviewing the data easier.
Steps in open source intelligence analysis
Based on the process of obtaining information and transforming it into intelligence, OSINT analysis can be divided into three steps:
Access to open source information
Information comes from public or open sources, including but not limited to the Internet, public publications, television broadcasts, publicity announcements, and so on. Public or open information refers to information that has been voluntarily released or otherwise made publicly available by its owner.
Information analysis
This refers to classifying and screening the collected open source information according to particular themes and principles to form an information collection, sometimes called an intelligence information database. The information at this point is still raw open source material, but similar items have been merged, which gives it reference value and a certain intelligence function. In practice, an overwhelming flood of unprocessed information can interfere with decision-making or simply overwhelm people. Large amounts of disparate information are scattered across every corner of the network, and it must be mined, collected, accumulated, and organized, as sketched below.
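A minimal sketch of such theme-based screening, assuming processed items with a "content" field; the theme names and keywords are illustrative only:

```python
# Minimal sketch: file each processed item under every theme whose
# keywords it mentions, forming a small themed information collection.
from collections import defaultdict

THEMES = {
    "supply chain": {"factory", "shipment", "logistics", "supplier"},
    "leadership": {"ceo", "board", "executive", "appointment"},
}


def classify(items):
    """Group processed items by theme; unmatched items are kept aside."""
    collection = defaultdict(list)
    for item in items:
        words = set(item["content"].split())
        matched = False
        for theme, keywords in THEMES.items():
            if words & keywords:
                collection[theme].append(item)
                matched = True
        if not matched:
            collection["unclassified"].append(item)
    return collection
```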
Formation of valuable intelligence
This is the result obtained through comprehensive analysis and research of open source information, driven by the needs of specific consumers. For organizations or individuals engaged in open source intelligence work, intelligence needs may be clear or uncertain because of the diversity and variability of those consumers. Where the needs are clear, information collection and research can be tightly targeted to them. Where the needs are uncertain, it is necessary to study the types of consumers and their potential requirements, and then carry out targeted collection and research to produce the high-value intelligence those consumers need.