
CN106570136B - Remote sensing image semantic retrieval method and device based on pixel level association rule

Remote sensing image semantic retrieval method and device based on pixel level association rule

Info

Publication number
CN106570136B
CN106570136B
Authority
CN
China
Prior art keywords
image
semantic
transaction set
pixel
images
Prior art date
Legal status
Active
Application number
CN201610958131.XA
Other languages
Chinese (zh)
Other versions
CN106570136A (en)
Inventor
刘军
陈劲松
陈凯
郭善昕
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201610958131.XA priority Critical patent/CN106570136B/en
Publication of CN106570136A publication Critical patent/CN106570136A/en
Application granted granted Critical
Publication of CN106570136B publication Critical patent/CN106570136B/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of remote sensing image retrieval, and in particular to a remote sensing image semantic retrieval method and device based on pixel-level association rules. The retrieval method comprises the following steps: constructing transaction sets of training samples; extracting association rules of the training samples from the transaction sets and establishing a training model according to those association rules; calculating the association rules of each image in the image library, inputting them into the training model to obtain the semantic vector of each image, and performing image retrieval by comparing the semantic vector of the image to be retrieved with the semantic vectors of the images in the library. By extracting association rules of remote sensing images from transaction sets, establishing a training model between the association rules and the image categories, obtaining the semantic vector of each image through the training model, and retrieving remote sensing images through the semantic vectors of the image to be retrieved and of all images in the image library, the accuracy of remote sensing image retrieval is improved.

Description

A remote sensing image semantic retrieval method and device based on pixel-level association rules

Technical Field

The present invention relates to the technical field of remote sensing image retrieval, and in particular to a remote sensing image semantic retrieval method and device based on pixel-level association rules.

Background Art

Remote sensing images are characterized by a large image extent and abundant, complex content, and the phenomena of "same object, different spectra" and "different objects, same spectrum" are very common, which makes remote sensing image retrieval considerably difficult. Image retrieval means searching a database for images that contain specified features or similar content. Current mainstream content-based image retrieval (CBIR) methods combine knowledge from image processing, information retrieval, machine learning, computer vision, artificial intelligence and many other fields, and use visual features automatically extracted from the images as the description of image content; to date, a large body of research results has been produced in content-based image retrieval.

Visual feature extraction plays an important role in image retrieval and can be divided into two research directions. The first studies the extraction and similarity measurement of low-level visual features such as spectrum, texture and shape, including hyperspectral image retrieval based on absorption features of spectral curves; color feature extraction using color spaces and color moments; texture description using the wavelet transform, the Contourlet transform, Gabor wavelets, generalized Gaussian models, texture spectra and similar methods; and shape description of remote sensing images based on the pixel shape index, PHOG (Pyramid Histogram of Oriented Gradients) and wavelet pyramids. The use of such low-level visual features is relatively mature, but they cannot describe the semantic information of an image, and the retrieval results they provide often fall well short of the human understanding of remote sensing images and are not fully satisfactory.

To address this problem, the other research direction establishes mapping models between low-level visual features and semantics, so as to improve retrieval accuracy at the semantic level. The main research achievements include semantic retrieval methods based on statistical learning, such as Bayesian classifier models, context-based Bayesian networks, and Bayesian networks with EM (expectation maximization) parameter estimation; retrieval methods based on semantic annotation, such as language indexing models and concept semantic distribution models; GIS (Geographic Information System) assisted semantic retrieval methods, such as using the spatial and attribute information of vector features in GIS data to guide semantic assignment; and ontology-based semantic retrieval methods, such as methods based on visual-object domain ontologies and GeoIRIS. These methods can, to a certain extent, reflect the semantic understanding process of the human brain during image retrieval, achieve relatively high accuracy, and represent the development trend of image retrieval. However, current semantic retrieval methods often focus too much on the construction of the mapping model between low-level visual features and semantics while neglecting factors such as the types of low-level visual features and the semantic learning methods used, which ultimately limits the precision of semantic retrieval.

In recent years, characteristics of human visual perception have been introduced into the field of image retrieval and have received extensive attention, but such methods are still in their infancy and many problems remain to be solved, such as the physiological processes of the human visual system, feature description methods that better match human vision, bottom-up perceptual models, salient feature extraction and measurement, and top-down visual attention mechanisms. In addition, typical systems for remote sensing image data retrieval include the Swiss RSIA II+III project, which studies the description and retrieval of multi-resolution remote sensing image data based on spectral and texture features; Blobworld, the prototype system developed by the Berkeley Digital Library project, which uses aerial images, USGS orthophotos, topographic maps and SPOT satellite images as data sources and lets users improve retrieval results intuitively; the (RS)2I project of Nanyang Technological University in Singapore, whose research covers many aspects of remote sensing image feature extraction and description, multidimensional indexing techniques and distributed architecture design; SIMPLIcity from Stanford University, which uses a robust Integrated Region Matching (IRM) method to define the similarity between images and has obtained good results in satellite remote sensing image retrieval; and iFind from Microsoft Research Asia, which constructs a semantic network from image annotations and combines it with the visual features of the images during relevance feedback, effectively realizing relevance feedback at two levels. These systems have achieved important results, but both feature extraction and the selection of representative features still require further research.

In summary, most image retrieval methods, whether pixel-based or object-oriented, focus on statistical information of low-level features such as the color, texture and shape of the whole image, of local regions, or of object regions. Retrieval methods based directly on low-level features cannot extract the targets of interest and lack the ability to describe the spatial information of an image; they suffer from overly high feature dimensionality, incomplete description, poor accuracy, lack of regularity, and a semantic gap between the feature description and human cognition. At the same time, remote sensing image retrieval based on high-level semantic information still lacks mature theories and methods. The "semantic gap" between low-level features and high-level semantic information hinders the development and application of remote sensing image retrieval.

Summary of the Invention

The present invention provides a remote sensing image semantic retrieval method and device based on pixel-level association rules, aiming to solve, at least to a certain extent, one of the above technical problems in the prior art.

To solve the above problems, the present invention provides the following technical solutions:

A remote sensing image semantic retrieval method based on pixel-level association rules, comprising the following steps:

Step a: constructing transaction sets of training samples;

Step b: extracting association rules of the training samples from the transaction sets of the training samples, and establishing a training model according to the association rules of the training samples;

Step c: calculating the association rules of each image in the image library, inputting the association rules of each image into the training model to obtain the semantic vector of each image, and performing image retrieval by comparing the similarity between the semantic vector of the image to be retrieved and the semantic vector of each image in the image library.

The technical solution adopted by the embodiment of the present invention further provides that step a also comprises: according to the specified image categories, selecting a certain number of remote sensing images from each image category as training samples, and performing pixel gray-level compression on the training samples.

The technical solution adopted by the embodiment of the present invention further provides that, in step a, constructing the transaction sets of the training samples specifically means constructing a transaction set of training samples for each image category separately; the construction methods include:

Gray-value transaction set construction: the gray values of each band of every pixel are used to construct the transaction set;

Edge four-direction transaction set construction: the Canny operator is used to extract the edges of the image, and then multiple directions of each edge-point pixel are extracted; taking the neighborhood as the unit, the arrangement of the gray values of all pixels along each direction within the neighborhood constitutes one transaction in the transaction set, so each edge point contributes 4 transactions;

Pixel four-direction transaction set construction: the sequences of gray values along the four directions of every pixel are used to construct the transaction set.

The technical solution adopted by the embodiment of the present invention further provides that, in step c, image retrieval by comparing the similarity between the semantic vector of the image to be retrieved and that of each image in the image library is performed as follows: the association rules of the image to be retrieved and of all images in the image library are calculated separately, the calculation results are input into the training model, and the training model outputs the semantic vectors of all images; the distances between the semantic vector of the image to be retrieved and the semantic vectors of all images in the image library are then calculated, the similarity between the image to be retrieved and all images in the image library is computed from the distances between the semantic vectors, and the retrieval results are returned according to the similarity.

The technical solution adopted by the embodiment of the present invention further provides that, in step c, the closer the distance between the semantic vectors, the higher the similarity between the two images; the similarity is calculated according to the following formula:

In the above formula, v1 and v2 are the two semantic vectors and N is the length of the vectors.

Another technical solution adopted by the embodiment of the present invention is a remote sensing image semantic retrieval device based on pixel-level association rules, comprising:

a transaction set construction module, used to construct the transaction sets of the training samples;

an association rule extraction module, used to extract the association rules of the training samples from the transaction sets of the training samples;

a model training module, used to establish a training model according to the association rules of the training samples;

a vector calculation module, used to calculate the association rules of each image in the image library and to input the association rules of each image into the training model to obtain the semantic vector of each image;

an image retrieval module, used to perform image retrieval by comparing the similarity between the semantic vector of the image to be retrieved and the semantic vector of each image in the image library.

The technical solution adopted by the embodiment of the present invention further comprises a sample selection module and a gray-level compression module: the sample selection module is used to select, according to the specified image categories, a certain number of remote sensing images from each image category as training samples; the gray-level compression module is used to perform pixel gray-level compression on the training samples.

The technical solution adopted by the embodiment of the present invention further provides that the transaction set construction module constructs the transaction sets of the training samples by constructing a transaction set of training samples for each image category separately; the construction methods include:

Gray-value transaction set construction: the gray values of each band of every pixel are used to construct the transaction set;

Edge four-direction transaction set construction: the Canny operator is used to extract the edges of the image, and then multiple directions of each edge-point pixel are extracted; taking the neighborhood as the unit, the arrangement of the gray values of all pixels along each direction within the neighborhood constitutes one transaction in the transaction set, so each edge point contributes 4 transactions;

Pixel four-direction transaction set construction: the sequences of gray values along the four directions of every pixel are used to construct the transaction set.

The technical solution adopted by the embodiment of the present invention further provides that the image retrieval module performs image retrieval by comparing the similarity between the semantic vector of the image to be retrieved and that of each image in the image library as follows: the association rule extraction module calculates the association rules of the image to be retrieved and of all images in the image library separately, the vector calculation module inputs the calculation results into the training model, and the training model outputs the semantic vectors of all images; the image retrieval module then calculates the distances between the semantic vector of the image to be retrieved and the semantic vectors of all images in the image library, computes the similarity between the image to be retrieved and all images in the image library from the distances between the semantic vectors, and returns the retrieval results according to the similarity.

The technical solution adopted by the embodiment of the present invention further provides that the closer the distance between the semantic vectors, the higher the similarity between the two images; the image retrieval module calculates the similarity according to the following formula:

In the above formula, v1 and v2 are the two semantic vectors and N is the length of the vectors.

Compared with the prior art, the beneficial effect of the embodiments of the present invention is as follows: the remote sensing image semantic retrieval method and device based on pixel-level association rules construct transaction sets, extract the association rules of remote sensing images from the transaction sets, establish a training model between the association rules and the image categories, obtain the semantic vector of each image through the training model, and then compute the similarity between the image to be retrieved and all images in the image library from the distances between the semantic vectors of the images, thereby realizing remote sensing image retrieval. The invention provides a feasible new way to realize remote sensing image retrieval from low-level visual features up to high-level semantic information and improves the accuracy of remote sensing image retrieval.

Description of the Drawings

Fig. 1 is a flowchart of the remote sensing image semantic retrieval method based on pixel-level association rules according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of the pixel gray values along the 4 directions of an edge-point pixel;

Fig. 3(a) is the original remote sensing image, Fig. 3(b) is the compressed remote sensing image, and Fig. 3(c) is the edge detection image;

Fig. 4 is a schematic structural diagram of the remote sensing image semantic retrieval device based on pixel-level association rules according to an embodiment of the present invention;

Fig. 5 shows the 8 house images to be retrieved;

Fig. 6 shows the first 24 images returned by multi-band association rule retrieval;

Fig. 7 shows the first 24 images returned by the histogram matching retrieval method;

Fig. 8 is a schematic diagram of the average precision of the house retrieval results obtained with each method;

Fig. 9 shows the 8 square images to be retrieved;

Fig. 10 shows the first 24 images returned by multi-band association rule retrieval;

Fig. 11 shows the first 24 images returned by the color moment retrieval method;

Fig. 12 is a schematic diagram of the average precision of the square retrieval results obtained with each method.

Detailed Description

In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.

Please refer to Fig. 1, which is a flowchart of the remote sensing image semantic retrieval method based on pixel-level association rules according to an embodiment of the present invention. The method comprises the following steps:

Step 100: according to the specified image categories, select a certain number of remote sensing images from each image category as training samples;

In step 100, the specified image categories include, but are not limited to, grassland, woodland, lake, road and so on.

Step 200: perform pixel gray-level compression on the training samples;

In step 200, a pixel-based remote sensing image has 256 gray levels. If the 256 gray levels were used directly for association rule mining on the remote sensing image, the amount of computation would be very large; moreover, with so many gray levels the support of frequent itemsets would be extremely small, which makes it hard to extract association rules with sufficiently high support and confidence. Therefore, before association rule mining, the remote sensing image must first be compressed to only a few gray levels in order to reduce the computation required for association rule mining.

In the embodiment of the present invention, the pixel gray-level compression of the training samples is performed as follows: the image is compressed using the neighborhood mean and variance. For each pixel in every 3*3 neighborhood of the remote sensing image, the mean μ and standard deviation σ of the neighborhood are computed, and the compressed gray level of the centre pixel of the neighborhood is then calculated with the following formula:

In formula (1), g is the original gray level of the centre pixel, g' is the compressed gray level of the centre pixel, and c is a proportional coefficient whose value lies in [0.1, 0.5]. With this method, the original remote sensing image can be compressed to the 3 gray levels 0, 1 and 2.
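
Formula (1) itself is not reproduced in this text. A minimal sketch of the neighborhood-based compression described above, assuming the centre pixel is simply thresholded at mu - c*sigma and mu + c*sigma (an assumption, since the exact formula is not shown), could look like the following Python code; names are illustrative:

```python
import numpy as np

def compress_by_neighborhood(band, c=0.3):
    """Compress a single band to the 3 gray levels 0, 1, 2.

    For every 3x3 neighborhood the mean and standard deviation are computed;
    the centre pixel is mapped to 0, 1 or 2 depending on whether it lies
    below mu - c*sigma, above mu + c*sigma, or in between (assumed
    thresholding, since formula (1) is not reproduced in the source text).
    """
    band = band.astype(np.float64)
    h, w = band.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = band[i - 1:i + 2, j - 1:j + 2]
            mu, sigma = patch.mean(), patch.std()
            g = band[i, j]
            if g < mu - c * sigma:
                out[i, j] = 0
            elif g > mu + c * sigma:
                out[i, j] = 2
            else:
                out[i, j] = 1
    return out
```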

In another embodiment of the present invention, the attributes of the object are quantized to the range [1, G] by uniform segmentation, specifically by average compression, which distributes the 256 gray levels evenly over a number of gray levels:

where G is the maximum gray level, G = 8, ceil() is the ceiling function, and the +1 offset ensures that the gray levels of the image are compressed into the range 1 to 8.

Alternatively, the attributes of the object are quantized to the range [1, G] by uniform segmentation using linear segmentation: first the maximum gray level gMax and the minimum gray level gMin of the image are computed, and the compressed gray level is then calculated with the following formula:

where G is the maximum gray level, G = 8.

The more gray levels remain after compression, the larger the amount of computation for association rule mining, but the closer the reflected relationship between pixels is to reality; conversely, the fewer gray levels remain, the smaller the differences between compressed pixels, which makes it harder to mine meaningful association rules. Choosing a suitable number of gray levels is therefore very important. In the embodiment of the present invention the maximum number of gray levels is set to 8, and average compression is used, with the following compression formula:

In formula (4), G is the maximum number of gray levels, G = 8, ceil() is the ceiling function, and the +1 offset ensures that the pixel gray levels of the remote sensing image are compressed into the range 1 to 8.
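
Formulas (2) to (4) are likewise not reproduced here. The average-compression sketch below follows the surrounding description (ceil(), the +1 offset, G = 8); the linear-segmentation variant is a plausible reconstruction and should be read as an assumption:

```python
import numpy as np

G = 8  # maximum number of gray levels after compression

def average_compress(band, levels=G):
    """Average compression: map gray levels 0..255 uniformly onto 1..G.

    Follows the description around formulas (2) and (4): ceil() with a +1
    offset so that the result falls in the range 1..G.
    """
    band = band.astype(np.float64)
    return np.ceil((band + 1) * levels / 256.0).astype(np.uint8)

def linear_compress(band, levels=G):
    """Linear-segmentation compression between the image minimum and maximum.

    The exact form of formula (3) is not shown in the source; this maps
    gMin..gMax linearly onto 1..G as one plausible reading of the text.
    """
    band = band.astype(np.float64)
    g_min, g_max = band.min(), band.max()
    scaled = (band - g_min) / max(g_max - g_min, 1.0)   # normalised to 0..1
    return np.clip(np.ceil(scaled * levels), 1, levels).astype(np.uint8)
```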

Step 300: for each image category, construct a transaction set of training samples;

In step 300, the transaction sets are built from the attributes of each pixel itself. The construction methods include the gray-value transaction set construction method, the edge four-direction transaction set construction method and the pixel four-direction transaction set construction method. The gray-value transaction set construction method directly uses the gray values of each band of every pixel to build the transaction set. For example, for an RGB three-band image, every pixel has the three gray values R, G and B, and the transaction set is constructed from these gray values, as in the following example:

ID | item
1 | 9, 7, 3
2 | 8, 6, 5
3 | 7, 8, 9
4 | 9, 5, 8
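
A minimal sketch of the gray-value transaction set construction just described, assuming the compressed multi-band image is held in a NumPy array of shape (rows, cols, bands), with one transaction per pixel and one item per band:

```python
import numpy as np

def gray_value_transactions(image):
    """Build the gray-value transaction set of a multi-band image.

    `image` has shape (rows, cols, bands) and its values have already been
    compressed to a small number of gray levels; every pixel becomes one
    transaction whose items are the gray values of its bands.
    """
    rows, cols, _bands = image.shape
    return [tuple(image[i, j, :]) for i in range(rows) for j in range(cols)]
```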

The edge four-direction transaction set construction method works as follows: taking the neighborhood as the unit, the arrangement of the gray values of all pixels within the neighborhood forms one transaction of the transaction set; for example, the gray values of the 9 pixels in a 3*3 neighborhood can form one transaction. For an image of 100*100 pixels this yields a transaction set of 98*98 = 9604 transactions, each containing 9 items. The larger the remote sensing image, the more transactions are formed and the larger the transaction set becomes; and the more items a transaction contains, the larger the frequent itemsets that must be computed and the greater the amount of computation, so the number of items per transaction has to be limited. Considering that the edges of a remote sensing image contain a large amount of useful information and that edges are directional, the present invention first uses the Canny operator to extract the edges of the remote sensing image and then extracts the 4 directions of each edge-point pixel, taking the gray values of the 3 pixels along each direction as the elements of one transaction. The pixel gray values along the 4 directions of an edge-point pixel are shown in Fig. 2.

Because edge-point pixels make up only a small proportion of the whole remote sensing image, and each transaction contains only 3 items, the amount of computation can be reduced significantly. Please refer to Fig. 3(a), Fig. 3(b) and Fig. 3(c): Fig. 3(a) is the original remote sensing image, 128*128 pixels in size; Fig. 3(b) is the compressed remote sensing image (for display purposes each gray level is multiplied by 256/G), and it can be seen from the compressed image that although the number of gray levels is reduced to 8, the content of the remote sensing image does not change much; Fig. 3(c) is the edge detection image, in which 1964 edge points are detected, so the resulting transaction set has a size of 1964*4 = 7856, with each transaction containing 3 items. If the image contains richer ground objects, more edge points are detected and the resulting transaction set is larger. Table 4-16 shows some of the transactions in this transaction set.

Table 4-16: Some transactions in the transaction set

The pixel four-direction transaction set construction method directly uses the sequences of gray values along the four directions of every pixel to build the transaction set.
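
The sketch below illustrates the edge four-direction and pixel four-direction constructions. The source only states that the 4 directions of each edge pixel are used with 3 gray values per direction; the concrete choice of the horizontal, vertical and two diagonal lines through the 3x3 neighborhood, the Canny thresholds, and detecting edges on the original band while reading gray values from the compressed band are all assumptions:

```python
import numpy as np
import cv2

# Offsets of the 3 pixels along each of the 4 assumed directions through (i, j).
DIRECTIONS = [
    [(0, -1), (0, 0), (0, 1)],    # horizontal
    [(-1, 0), (0, 0), (1, 0)],    # vertical
    [(-1, -1), (0, 0), (1, 1)],   # main diagonal
    [(-1, 1), (0, 0), (1, -1)],   # anti-diagonal
]

def edge_four_direction_transactions(original_band, compressed_band, low=50, high=150):
    """Edge four-direction transaction set of one band.

    Edges are detected with the Canny operator; for every edge pixel the
    gray values of the 3 pixels along each of the 4 directions form one
    transaction, so every edge pixel contributes 4 transactions of 3 items.
    """
    edges = cv2.Canny(original_band.astype(np.uint8), low, high)
    h, w = compressed_band.shape
    transactions = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if edges[i, j] == 0:
                continue
            for offsets in DIRECTIONS:
                transactions.append(
                    tuple(compressed_band[i + di, j + dj] for di, dj in offsets))
    return transactions

def pixel_four_direction_transactions(compressed_band):
    """Pixel four-direction variant: the same 4 direction sequences are
    collected for every interior pixel instead of only for edge pixels."""
    h, w = compressed_band.shape
    return [tuple(compressed_band[i + di, j + dj] for di, dj in offsets)
            for i in range(1, h - 1) for j in range(1, w - 1)
            for offsets in DIRECTIONS]
```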

Step 400: use an association rule mining algorithm to extract the association rules of each image category from the transaction sets of the training samples;

In step 400, since the transaction sets are built from the attributes of each pixel itself, the mined association rules are association rules between the attributes of individual pixels. The extraction of the association rules proceeds as follows:

First the frequent itemsets must be computed. Computing frequent itemsets is a key step of association rule mining, and its cost directly determines the cost of the whole mining process. For an item consisting of 3 elements, denoted [a b c], the association rules that can be extracted when support is not taken into account are the 12 rules shown in Table 4-17:

Table 4-17: Association rules generated from 3 items

a=>b | a=>c | b=>c | b=>a | c=>a | c=>b
ab=>c | ac=>b | bc=>a | a=>bc | b=>ac | c=>ab

Since the first 6 association rules involve only the relationship between two elements, they are not sufficient to express the content of a remote sensing image, and they would increase the amount of computation for association rule mining and for the subsequent similarity calculation. The embodiment of the present invention therefore uses a frequent itemset algorithm for mining association rules, such as Apriori or FP-Growth, to extract from the constructed transaction set the 6 association rules that satisfy the specified confidence and support.
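
The text leaves the choice of mining algorithm open (Apriori, FP-Growth or similar). A small self-contained sketch that counts the 3-item transactions directly and keeps only the 6 rule forms involving all three items (ab=>c, ac=>b, bc=>a, a=>bc, b=>ac, c=>ab) might look as follows; the support and confidence thresholds are illustrative values, not values given by the patent:

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=0.01, min_confidence=0.6):
    """Mine association rules from transactions of 3 items each.

    Only rules whose antecedent and consequent together cover all three
    items of a frequent 3-itemset are kept, matching the 6 rule forms
    retained in the text above.
    """
    n = len(transactions)
    itemset_count = Counter()
    for t in transactions:
        items = sorted(set(t))
        # Transactions are tiny, so every non-empty subset can be counted.
        for size in range(1, len(items) + 1):
            for subset in combinations(items, size):
                itemset_count[frozenset(subset)] += 1

    rules = []
    for itemset, count in itemset_count.items():
        if len(itemset) != 3:
            continue
        support = count / n
        if support < min_support:
            continue
        for k in (1, 2):  # antecedent of 1 item (a=>bc) or of 2 items (ab=>c)
            for antecedent in combinations(sorted(itemset), k):
                antecedent = frozenset(antecedent)
                consequent = itemset - antecedent
                confidence = count / itemset_count[antecedent]
                if confidence >= min_confidence:
                    rules.append((tuple(sorted(antecedent)),
                                  tuple(sorted(consequent)),
                                  support, confidence))
    return rules
```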

Step 500: use a machine learning algorithm to train on the association rules and the image categories to which they belong, and establish a training model between the association rules and the image categories;

In step 500, the embodiment of the present invention uses a support vector machine learning algorithm for model training; in other embodiments of the present invention, the machine learning algorithm may also include various clustering algorithms, the expectation maximization (EM) algorithm, and so on.

Step 600: input the association rules of all images in the image library into the training model to obtain the semantic vector of each image;

In step 600, the association rules of the image to be retrieved and of all images in the image library are first computed according to steps 200 to 500, mining the association rules of each image; the association rules are then input into the training model established in step 500, which outputs the membership degree of each image for each category, and the vector formed by these membership values is taken as the semantic vector describing the content of that image. For example, the semantic vector of an image may be (0.8, 0.1, 0.05, 0.05), corresponding to four categories (grassland, woodland, lake, road).
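
How a mined rule set is turned into a fixed-length input for the classifier is not spelled out in this text. The sketch below assumes one confidence value per rule in a fixed rule vocabulary (built from the rules mined on the training samples, e.g. with the mine_rules sketch above) and uses scikit-learn's SVC with probability estimates so that the per-class membership degrees can serve as the semantic vector; the encoding is an assumption made for illustration:

```python
import numpy as np
from sklearn.svm import SVC

def rules_to_feature(rules, rule_vocabulary):
    """Encode a mined rule set as a fixed-length vector of confidences.

    `rule_vocabulary` is an ordered list of (antecedent, consequent) pairs
    collected from the training samples; rules absent from an image are
    encoded as 0.0.  This encoding is an assumption, not part of the patent.
    """
    confidence = {(a, c): conf for a, c, _support, conf in rules}
    return np.array([confidence.get(key, 0.0) for key in rule_vocabulary])

def train_semantic_model(train_features, train_labels):
    """Train the association-rule -> image-category model with an SVM."""
    model = SVC(probability=True)   # probability=True enables predict_proba
    model.fit(np.asarray(train_features), np.asarray(train_labels))
    return model

def semantic_vector(model, feature):
    """Membership degrees of one image for every category, e.g. something
    like (0.8, 0.1, 0.05, 0.05) for four classes such as grassland,
    woodland, lake and road; the component order follows model.classes_."""
    return model.predict_proba(np.asarray(feature).reshape(1, -1))[0]
```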

Step 700: calculate the distances between the semantic vector of the image to be retrieved and the semantic vectors of all images in the image library, compute the similarity between the image to be retrieved and all images in the image library from the distances between the semantic vectors, and return a certain number of the most similar images as the retrieval results;

In step 700, after the semantic vectors of all images have been obtained, the similarity between two images can be computed from the distance between their semantic vectors: the closer the distance between the semantic vectors of two images, the higher their similarity. The embodiment of the present invention uses a first-order approximation of the KL divergence (Kullback-Leibler divergence) as the distance for computing image similarity, with the following formula:

In formula (5), v1 and v2 are the two semantic vectors and N is the length of the vectors; in other embodiments of the present invention, various other distance functions, such as the city-block distance or the Euclidean distance, can also be used to compute image similarity.

All images in the image library are sorted in ascending order of distance, and a certain number of returned images are output as the retrieval results.
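
Formula (5) is not reproduced in this text; the symmetrized KL-style distance used below, the sum over (v1_i - v2_i) * log(v1_i / v2_i), is one common form and is used purely as an assumed stand-in, followed by the distance-based ranking described above:

```python
import numpy as np

def kl_like_distance(v1, v2, eps=1e-12):
    """Symmetric KL-style distance between two semantic vectors.

    The exact first-order approximation of formula (5) is not shown in the
    source text; this symmetrized form is an assumed substitute.  A small
    eps keeps the logarithm finite for zero components.
    """
    v1 = np.asarray(v1, dtype=np.float64) + eps
    v2 = np.asarray(v2, dtype=np.float64) + eps
    return float(np.sum((v1 - v2) * np.log(v1 / v2)))

def retrieve(query_vector, library_vectors, top_k=64):
    """Rank the image library by distance to the query semantic vector and
    return the indices of the top_k images with the smallest distance."""
    distances = [kl_like_distance(query_vector, v) for v in library_vectors]
    return list(np.argsort(distances)[:top_k])
```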

Please refer to Fig. 4, which is a schematic structural diagram of the remote sensing image semantic retrieval device based on pixel-level association rules according to an embodiment of the present invention. The device comprises a sample selection module, a gray-level compression module, a transaction set construction module, an association rule extraction module, a model training module, a vector calculation module and an image retrieval module.

The sample selection module is used to select, according to the specified image categories, a certain number of remote sensing images from each image category as training samples.

The gray-level compression module is used to perform pixel gray-level compression on the training samples. A pixel-based remote sensing image has 256 gray levels; if the 256 gray levels were used directly for association rule mining on the remote sensing image, the amount of computation would be very large, and with so many gray levels the support of frequent itemsets would be extremely small, which makes it hard to extract association rules with sufficiently high support and confidence. Therefore, before association rule mining, the remote sensing image must first be compressed to only a few gray levels in order to reduce the computation required for association rule mining.

In the embodiment of the present invention, the pixel gray-level compression of the training samples is performed as follows: the image is compressed using the neighborhood mean and variance. For each pixel in every 3*3 neighborhood of the remote sensing image, the mean μ and standard deviation σ of the neighborhood are computed, and the compressed gray level of the centre pixel of the neighborhood is then calculated with the following formula:

In formula (1), g is the original gray level of the centre pixel, g' is the compressed gray level of the centre pixel, and c is a proportional coefficient whose value lies in [0.1, 0.5]. With this method, the original remote sensing image can be compressed to the 3 gray levels 0, 1 and 2.

In another embodiment of the present invention, the attributes of the object are quantized to the range [1, G] by uniform segmentation, specifically by average compression, which distributes the 256 gray levels evenly over a number of gray levels:

where G is the maximum gray level, G = 8, ceil() is the ceiling function, and the +1 offset ensures that the gray levels of the image are compressed into the range 1 to 8.

Alternatively, the attributes of the object are quantized to the range [1, G] by uniform segmentation using linear segmentation: first the maximum gray level gMax and the minimum gray level gMin of the image are computed, and the compressed gray level is then calculated with the following formula:

where G is the maximum gray level, G = 8.

The more gray levels remain after compression, the larger the amount of computation for association rule mining, but the closer the reflected relationship between pixels is to reality; conversely, the fewer gray levels remain, the smaller the differences between compressed pixels, which makes it harder to mine meaningful association rules. Choosing a suitable number of gray levels is therefore very important. In the embodiment of the present invention the maximum number of gray levels is set to 8, and average compression is used, with the following compression formula:

In formula (4), G is the maximum number of gray levels, G = 8, ceil() is the ceiling function, and the +1 offset ensures that the pixel gray levels of the remote sensing image are compressed into the range 1 to 8.

The transaction set construction module is used to construct, for each image category, a transaction set of training samples. The transaction sets are built from the attributes of each pixel itself, and the construction methods include the gray-value transaction set construction method, the edge four-direction transaction set construction method and the pixel four-direction transaction set construction method. The gray-value transaction set construction method directly uses the gray values of each band of every pixel to build the transaction set. For example, for an RGB three-band image, every pixel has the three gray values R, G and B, and the transaction set is constructed from these gray values, as in the following example:

ID | item
1 | 9, 7, 3
2 | 8, 6, 5
3 | 7, 8, 9
4 | 9, 5, 8

The edge four-direction transaction set construction method works as follows: taking the neighborhood as the unit, the arrangement of the gray values of all pixels within the neighborhood forms one transaction of the transaction set; for example, the gray values of the 9 pixels in a 3*3 neighborhood can form one transaction. For an image of 100*100 pixels this yields a transaction set of 98*98 = 9604 transactions, each containing 9 items. The larger the remote sensing image, the more transactions are formed and the larger the transaction set becomes; and the more items a transaction contains, the larger the frequent itemsets that must be computed and the greater the amount of computation, so the number of items per transaction has to be limited. Considering that the edges of a remote sensing image contain a large amount of useful information and that edges are directional, the present invention first uses the Canny operator to extract the edges of the remote sensing image and then extracts the 4 directions of each edge-point pixel, taking the gray values of the 3 pixels along each direction as the elements of one transaction. The pixel gray values along the 4 directions of an edge-point pixel are shown in Fig. 2.

Because edge-point pixels make up only a small proportion of the whole remote sensing image, and each transaction contains only 3 items, the amount of computation can be reduced significantly. Please refer to Fig. 3(a), Fig. 3(b) and Fig. 3(c): Fig. 3(a) is the original remote sensing image, 128*128 pixels in size; Fig. 3(b) is the compressed remote sensing image (for display purposes each gray level is multiplied by 256/G), and it can be seen from the compressed image that although the number of gray levels is reduced to 8, the content of the remote sensing image does not change much; Fig. 3(c) is the edge detection image, in which 1964 edge points are detected, so the resulting transaction set has a size of 1964*4 = 7856, with each transaction containing 3 items. If the image contains richer ground objects, more edge points are detected and the resulting transaction set is larger. Table 4-16 shows some of the transactions in this transaction set.

Table 4-16: Some transactions in the transaction set

The pixel four-direction transaction set construction method directly uses the sequences of gray values along the four directions of every pixel to build the transaction set.

The association rule extraction module is used to extract, with an association rule mining algorithm, the association rules of each image category from the transaction sets of the training samples. Since the transaction sets are built from the attributes of each pixel itself, the mined association rules are association rules between the attributes of individual pixels.

The extraction of the association rules proceeds as follows: first the frequent itemsets must be computed. Computing frequent itemsets is a key step of association rule mining, and its cost directly determines the cost of the whole mining process. For an item consisting of 3 elements, denoted [a b c], the association rules that can be extracted when support is not taken into account are the 12 rules shown in Table 4-17:

Table 4-17: Association rules generated from 3 items

a=>b | a=>c | b=>c | b=>a | c=>a | c=>b
ab=>c | ac=>b | bc=>a | a=>bc | b=>ac | c=>ab

Since the first 6 association rules involve only the relationship between two elements, they are not sufficient to express the content of a remote sensing image, and they would increase the amount of computation for association rule mining and for the subsequent similarity calculation. The embodiment of the present invention therefore uses a frequent itemset algorithm for mining association rules, such as Apriori or FP-Growth, to extract from the constructed transaction set the 6 association rules that satisfy the specified confidence and support.

The model training module is used to train, with a machine learning algorithm, on the association rules and the image categories to which they belong, and to establish a training model between the association rules and the image categories. The embodiment of the present invention uses a support vector machine learning algorithm for model training; in other embodiments of the present invention, the machine learning algorithm may also include various clustering algorithms, the expectation maximization (EM) algorithm, and so on.

The vector calculation module is used to input the association rules of all images into the training model to obtain the semantic vector of each image. First, the association rules of the image to be retrieved and of all images in the image library are computed, mining the association rules of each image; the association rules are then input into the training model established by the model training module, which outputs the membership degree of each image for each category, and the vector formed by these membership values is taken as the semantic vector describing the content of that image. For example, the semantic vector of an image may be (0.8, 0.1, 0.05, 0.05), corresponding to four categories (grassland, woodland, lake, road).

The image retrieval module is used to calculate the distances between the semantic vector of the image to be retrieved and the semantic vectors of all images in the image library, to compute the similarity between the image to be retrieved and all images in the image library from the distances between the semantic vectors, and to return a certain number of the most similar images as the retrieval results. After the semantic vectors of all images have been obtained, the similarity between two images can be computed from the distance between their semantic vectors: the closer the distance between the semantic vectors of two images, the higher their similarity. The embodiment of the present invention uses a first-order approximation of the KL divergence (Kullback-Leibler divergence) as the distance for computing image similarity, with the following formula:

In formula (5), v1 and v2 are the two semantic vectors and N is the length of the vectors; in other embodiments of the present invention, various other distance functions, such as the city-block distance or the Euclidean distance, can also be used to compute image similarity.

All images in the image library are sorted in ascending order of distance, and a certain number of returned images are output as the retrieval results.

In order to verify the effectiveness of the present invention, retrieval experiments were carried out on remote sensing images from different sensors in the following embodiments, and the remote sensing image semantic retrieval method based on pixel-level association rules of the present invention was compared with existing retrieval methods such as histogram matching, color moments, Gabor wavelets and DT-CWT (Dual-Tree Complex Wavelet Transform).

WorldView-2 Image Retrieval Experiment

WorldView-2 images are similar to QuickBird images. A remote sensing image fusion method was used to fuse the panchromatic band with the true-color image composed of the red, green and blue bands, and an image library was built by non-overlapping tiling; the library contains 3250 tile images. Because the images contain many types of ground objects, to make the retrieval accuracy easy to evaluate the embodiment of the present invention selects only two easily distinguishable classes of ground objects, houses and squares, and randomly selects 8 tile images of each class as the images to be retrieved. Since the exact number of images of each class in the library is unknown, metrics such as recall and missed-detection rate cannot be used, whereas the average precision of the first N returned images reflects the retrieval performance of an algorithm while taking the user experience into account; the embodiment of the present invention therefore uses the average precision of the first 64 images to measure the performance of each retrieval algorithm. In the calculation, the correct images among the first 8, 16, 24, 32, 40, 48, 56 and 64 returned images are counted separately, and the mean of these 8 precision values is taken as the final precision. The two classes of ground objects and the corresponding retrieval results are as follows:
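
A short sketch of the evaluation protocol just described (precision among the first 8, 16, ..., 64 returned images, averaged over the 8 cut-offs); the function and variable names are illustrative:

```python
def average_precision_at_cutoffs(returned_labels, query_label,
                                 cutoffs=(8, 16, 24, 32, 40, 48, 56, 64)):
    """Average precision over the listed cut-offs for one query image.

    `returned_labels` holds the class label of every returned image in rank
    order, and `query_label` is the class of the query image.
    """
    precisions = []
    for n in cutoffs:
        top = returned_labels[:n]
        correct = sum(1 for label in top if label == query_label)
        precisions.append(correct / n)
    return sum(precisions) / len(precisions)
```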

(1) Houses

Figure 5 shows the 8 house query images. The houses in the image library include fairly large detached buildings, which may span two tile images, as well as dense clusters of small houses; no distinction is made between them during retrieval, and both are treated as the house class. House roofs are generally rather dark, and the surroundings may contain green trees and deep black shadows. Owing to space limitations, only the first 24 images returned by the multi-band association rule retrieval method of this embodiment and by the histogram matching method are shown: Figure 6 shows the first 24 images returned by multi-band association rule retrieval, and Figure 7 shows the first 24 images returned by histogram matching.

Figure 8 shows the average precision of the house retrieval results obtained with each method. As can be seen from Figure 8, the average precision of the multi-band association rule retrieval method of this embodiment is relatively stable and consistently higher than that of the other methods. Because the color and texture information of houses and their surrounding objects is fairly distinctive, the average precision of the other methods is also relatively stable. The Gabor wavelet method has the lowest average precision, because house structures are quite varied and their information is inconsistent across scales and orientations.

(2) Squares

Figure 9 shows the 8 square query images. Squares have relatively high brightness and a fairly regular layout, contain rather uniform ground objects, and may be surrounded by trees and lawns. The first 24 images returned by the multi-band association rule retrieval method of this embodiment and by the color moment retrieval method are shown in Figure 10 and Figure 11 respectively: Figure 10 shows the first 24 images returned by multi-band association rule retrieval, and Figure 11 shows the first 24 images returned by color moment retrieval.

Figure 12 shows the average precision of the square retrieval results obtained with each method. Because the gray values of squares are fairly uniform and their content is variable, the color moment and texture features are not distinctive; as a result, the average precision of histogram matching is higher than that of the Gabor wavelet, DT-CWT, and color moment methods, but all of them are lower than that of the multi-band association rule method of this embodiment. The average precision of the multi-band association rule method is relatively consistent across the cutoffs, because the uniform gray values of squares lead to fairly good retrieval results.

The remote sensing image semantic retrieval method and device based on pixel-level association rules of the embodiments of the invention construct transaction sets, extract the association rules of remote sensing images from those transaction sets, and build a training model between the association rules and the image categories; the semantic vector of the image to be retrieved is obtained from the training model, and the similarity between the image to be retrieved and every image in the image library is then computed from the distance between their semantic vectors, thereby realizing remote sensing image retrieval. The invention provides a feasible new way to go from low-level visual features to high-level semantic information for remote sensing image retrieval, and improves the accuracy of remote sensing image retrieval.

The above description of the disclosed embodiments enables those skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the invention. Therefore, the invention is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A remote sensing image semantic retrieval method based on pixel-level association rules, characterized by comprising the following steps:
Step a: constructing transaction sets for training samples;
Step b: extracting the association rules of the training samples from their transaction sets, and building a training model from the association rules of the training samples;
Step c: computing the association rules of each image in the image library, inputting the association rules of each image into the training model to obtain the semantic vector of each image, and performing image retrieval by comparing the similarity between the semantic vector of the image to be retrieved and the semantic vectors of the images in the image library;
in step a, constructing the transaction sets of the training samples specifically means constructing a transaction set of training samples for each image category; the construction methods include:
gray-value transaction set construction: building transactions from the gray value of each band of each pixel;
edge four-direction transaction set construction: extracting the edges of the image with the Canny operator, then extracting several directions of each edge-point pixel; taking the neighborhood as the unit, the gray values of all pixels in the neighborhood along each direction are arranged into one transaction of the transaction set, so that each edge point yields 4 transactions;
pixel four-direction transaction set construction: building transactions from the gray-value sequences along the four directions of each pixel.
2. The remote sensing image semantic retrieval method based on pixel-level association rules according to claim 1, characterized in that step a further comprises: according to the specified image categories, selecting a certain number of remote sensing images from each image category as training samples, and performing pixel gray-level compression on the training samples.
3. The remote sensing image semantic retrieval method based on pixel-level association rules according to claim 2, characterized in that in step c, image retrieval by comparing the similarity of the semantic vectors of the image to be retrieved and of each image in the image library is performed as follows: the association rules of the image to be retrieved and of all images in the image library are computed separately and input into the training model, and the training model outputs the semantic vectors of all images; the distances between the semantic vector of the image to be retrieved and the semantic vectors of all images in the image library are computed separately, the similarity between the image to be retrieved and every image in the image library is calculated from these distances, and the retrieval results are returned according to the similarity.
4. The remote sensing image semantic retrieval method based on pixel-level association rules according to claim 3, characterized in that in step c, the smaller the distance between the semantic vectors, the higher the similarity of the two images, and the similarity is computed with the following formula:
In the above formula, v1 and v2 are the two semantic vectors and N is the length of the vectors.
5. A remote sensing image semantic retrieval device based on pixel-level association rules, characterized by comprising:
a transaction set construction module, configured to construct transaction sets for training samples;
an association rule extraction module, configured to extract the association rules of the training samples from their transaction sets;
a model training module, configured to build a training model from the association rules of the training samples;
a vector calculation module, configured to compute the association rules of each image in the image library and to input the association rules of each image into the training model to obtain the semantic vector of each image;
an image retrieval module, configured to perform image retrieval by comparing the similarity between the semantic vector of the image to be retrieved and the semantic vectors of the images in the image library;
wherein the transaction set construction module constructs the transaction sets of the training samples by constructing a transaction set of training samples for each image category; the construction methods include:
gray-value transaction set construction: building transactions from the gray value of each band of each pixel;
edge four-direction transaction set construction: extracting the edges of the image with the Canny operator, then extracting several directions of each edge-point pixel; taking the neighborhood as the unit, the gray values of all pixels in the neighborhood along each direction are arranged into one transaction of the transaction set, so that each edge point yields 4 transactions;
pixel four-direction transaction set construction: building transactions from the gray-value sequences along the four directions of each pixel.
6. The remote sensing image semantic retrieval device based on pixel-level association rules according to claim 5, characterized by further comprising a sample selection module and a gray-level compression module: the sample selection module is configured to select, according to the specified image categories, a certain number of remote sensing images from each image category as training samples; the gray-level compression module is configured to perform pixel gray-level compression on the training samples.
7. The remote sensing image semantic retrieval device based on pixel-level association rules according to claim 6, characterized in that the image retrieval module performs image retrieval by comparing the similarity of the semantic vectors of the image to be retrieved and of each image in the image library as follows: the association rule extraction module computes the association rules of the image to be retrieved and of all images in the image library separately; the vector calculation module inputs the results into the training model, and the training model outputs the semantic vectors of all images; the image retrieval module computes the distances between the semantic vector of the image to be retrieved and the semantic vectors of all images in the image library separately, calculates the similarity between the image to be retrieved and every image in the image library from these distances, and returns the retrieval results according to the similarity.
8. The remote sensing image semantic retrieval device based on pixel-level association rules according to claim 7, characterized in that the smaller the distance between the semantic vectors, the higher the similarity of the two images, and the image retrieval module computes the similarity with the following formula:
In the above formula, v1 and v2 are the two semantic vectors and N is the length of the vectors.