CN103605984B - Indoor scene classification method based on hypergraph learning - Google Patents
Indoor scene classification method based on hypergraph learning
- Publication number
- CN103605984B (application CN201310566625.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- hypergraph
- linear regression
- regression model
- semi-supervised
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
An indoor scene classification method based on hypergraph learning. Nearly one hundred object detectors extract objects from each image, and the resulting object descriptors are concatenated into a super descriptor that serves as the image's feature descriptor. A hypergraph is built over the image descriptors with the K-nearest-neighbor method, its Laplacian matrix is computed, and a semi-supervised learning framework is constructed. A linear regression model is built and added to the framework. Using the framework together with the extracted feature descriptors, part of the image descriptors are labeled, and the framework automatically and iteratively predicts the labels of the unlabeled images, completing the classification; the linear regression model is initialized during this automatic iteration. With the linear regression model and the extracted feature descriptors, newly added data can then be classified directly, without building the hypergraph again.
Description
Technical field
The invention relates to indoor scene classification, and in particular to an indoor scene classification method based on hypergraph learning.
Background art
At present, indoor scene classification generally relies on low-level feature descriptors that capture color, texture, shape, and similar information. These low-level descriptors work well for outdoor scene classification, but the many overlapping object types in indoor scenes make their performance there mediocre. As the field has developed, improved image descriptors have been introduced to raise classification accuracy, such as spatial pyramid matching ([1] S. Lazebnik, C. Schmid, and J. Ponce, "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories," in Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, 2006, vol. 2, pp. 2169-2178) and global descriptors ([2] C. Siagian and L. Itti, "Rapid biologically-inspired scene classification using features shared with visual attention," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 2, pp. 300-312, Feb. 2007). However, because they do not address the core difficulty of indoor scene images, these improved descriptors cannot greatly improve indoor classification accuracy. High-level feature descriptors that encode image semantics, by preserving much of an image's meaning, can recognize the many objects present in an indoor scene, and therefore play an important role in improving indoor scene classification.
Among high-level image descriptors, early work proposed describing image content with a series of semantic attributes, an approach that achieved good results in image retrieval and image classification. A Stanford University group also proposed a new super descriptor ([3] L. Li, H. Su, E. Xing, and F. Li, "Object Bank: A high-level image representation for scene classification and semantic feature sparsification," in Proc. Neural Information Processing Systems (NIPS), 2010) that describes images containing complex object classes, and indoor images in particular, especially well. These methods, however, still classify with conventional fully supervised techniques, which cannot jointly exploit the global attribute information of all the data and the local relationships among data points, so their classification accuracy remains unremarkable.
Summary of the invention
The object of the present invention is to provide an indoor scene classification method based on hypergraph learning.
The present invention comprises the following steps:
(1) Extract objects from each image with nearly one hundred object detectors, then concatenate the resulting object descriptors into a super descriptor that serves as the image's feature descriptor;
(2) Build a hypergraph over all generated image descriptors with the K-nearest-neighbor method, compute its Laplacian matrix from the generated hypergraph, and construct a semi-supervised learning framework;
(3) Build a linear regression model and embed it in the semi-supervised learning framework;
(4) Using the semi-supervised learning framework built in step (3) together with the feature descriptors extracted in step (1), label a subset of the image descriptors so that the framework automatically and iteratively predicts the labels of the unlabeled images, completing the classification; the linear regression model of step (3) is initialized during this automatic iteration;
(5) Using the linear regression model of step (3) together with the feature descriptors extracted in step (1), newly added data can be classified directly, without building the hypergraph again.
In step (2), the semi-supervised learning framework can be constructed as follows:
First compute the pairwise Euclidean distances between the extracted image feature descriptors and use them to build the incidence matrix H of the hypergraph:

H(υ, e) = 1 if vertex υ belongs to hyperedge e, and H(υ, e) = 0 otherwise,

where υ denotes a vertex of the hypergraph and e denotes a hyperedge;
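The incidence-matrix construction just described can be sketched in numpy. The function name, the one-hyperedge-per-image rule, and the default k are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def knn_incidence(X, k=3):
    """Build a hypergraph incidence matrix H from feature descriptors X.

    One hyperedge per image: it joins the image to its k nearest
    neighbours under Euclidean distance, so H[v, e] = 1 when vertex v
    belongs to hyperedge e and 0 otherwise.
    """
    n = X.shape[0]
    # pairwise Euclidean distances between descriptors
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    H = np.zeros((n, n))
    for e in range(n):
        members = np.argsort(dist[e])[:k + 1]  # the image itself plus its k NNs
        H[members, e] = 1.0
    return H
```

Each column of H then contains exactly k + 1 ones, and every vertex belongs at least to its own hyperedge.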
The weight w(e) of each hyperedge, the degree d(υ) of each vertex, and the degree δ(e) of each hyperedge can then be computed; placing w(e), d(υ), and δ(e) on the diagonals yields the diagonal matrices W, D_υ, and D_e. From these three diagonal matrices and the incidence matrix, the intermediate result Θ is computed as:

Θ = D_υ^(-1/2) H W D_e^(-1) H^T D_υ^(-1/2)
Subtracting Θ from the identity matrix I gives:

L = I - Θ
The result L is the hypergraph's Laplacian matrix; from it the regularization term of the semi-supervised learning framework can be built:

Ω(f) = f^T L f
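The Laplacian computation above can be sketched as follows, with uniform hyperedge weights assumed when none are supplied; the function names are illustrative:

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Return L = I - Theta with Theta = Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2).

    H is the vertex-by-hyperedge incidence matrix; w holds one weight per
    hyperedge (uniform weights are assumed when w is omitted).
    """
    n_v, n_e = H.shape
    w = np.ones(n_e) if w is None else np.asarray(w, dtype=float)
    W = np.diag(w)
    d_v = H @ w                # vertex degree d(v): total weight of incident edges
    delta_e = H.sum(axis=0)    # hyperedge degree delta(e): number of incident vertices
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
    De_inv = np.diag(1.0 / delta_e)
    Theta = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n_v) - Theta

def regularizer(L, f):
    """Omega(f) = f^T L f, the smoothness penalty on a label vector f."""
    return float(f @ L @ f)
```

A useful sanity check: L is symmetric positive semi-definite, and the vector of square-rooted vertex degrees lies in its null space.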
where f is the label vector to be predicted for the images and f^T is its transpose. The semi-supervised framework is then constructed as:

F = arg min_F { tr(F^T L F) + λ tr((F - Y)^T (F - Y)) }

where Y is the matrix of image annotations, tr denotes the matrix trace, and λ is a non-negative parameter that controls the balance between model complexity and the empirical loss; solving this formula yields the predicted labels F for all of the data.
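Minimising tr(F^T L F) + λ tr((F - Y)^T (F - Y)) has a closed form: setting the gradient 2LF + 2λ(F - Y) to zero gives F = (L + λI)^(-1) λY. A few lines of numpy reproduce this; the function name and the one-hot labelling convention are illustrative assumptions:

```python
import numpy as np

def propagate_labels(L, Y, lam=1.0):
    """Closed-form minimiser of tr(F^T L F) + lam * tr((F-Y)^T (F-Y)).

    F = (L + lam*I)^(-1) * lam * Y.  Rows of Y are one-hot for labelled
    images and all-zero for unlabelled ones.
    """
    n = L.shape[0]
    return np.linalg.solve(L + lam * np.eye(n), lam * Y)
```

On a toy path graph with the two endpoints labelled with different classes, the middle vertex receives equal scores for both classes, as symmetry demands.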
In step (3), the linear regression model lets newly added data be classified directly, without building the hypergraph again; the linear regression model formula is as follows:
g(x) = Q^T x + θ
where Q holds the first-order coefficients of the linear regression model and θ is the constant term; embedding this linear regression model in the semi-supervised learning framework gives the new framework:

J(F, Q, θ) = tr(F^T L F) + λ tr((F - Y)^T (F - Y)) + α ||XQ + 1θ^T - F||^2 + γ ||Q||^2

where X holds the feature descriptor of each image, and α and γ are non-negative regularization parameters that control the balance between model complexity and the empirical loss;
Because the formula above is convex, the optimal F can be found by setting the partial derivative with respect to each variable to zero. Writing the framework as J (and omitting θ for brevity), the conditions ∂J/∂F = 0 and ∂J/∂Q = 0 give:

2LF + 2λ(F - Y) + 2α(F - XQ) = 0
2αX^T (XQ - F) + 2γQ = 0
Substituting the Q obtained from the second equation into the first, the result for F is:

F = (K - αXM)^(-1) λY
where the intermediate result K denotes L + (λ + α)I and the intermediate result M denotes (αX^T X + γI)^(-1) αX^T. Substituting this F back into the stationarity equation for Q gives Q as:
Q = (αX^T X + γI)^(-1) αX^T F = MF
The resulting Q is the parameter matrix of the linear regression model. When new data arrives there is no need to build it into the hypergraph; its label information follows directly from the formula g(x) = Q^T x + θ.
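The closed forms for M, K, F, and Q translate directly into numpy. This is a hedged sketch: θ is dropped for brevity, the explicit λ factor on Y follows from the stationarity condition (for λ = 1 it matches the display in the text), and the names and default parameter values are illustrative assumptions:

```python
import numpy as np

def fit_semi_supervised_regression(L, X, Y, lam=1.0, alpha=1.0, gamma=0.1):
    """Solve the joint framework with the embedded linear regression.

    M = (alpha X^T X + gamma I)^(-1) alpha X^T
    K = L + (lam + alpha) I
    F = (K - alpha X M)^(-1) lam Y   # lam*Y from dJ/dF = 0
    Q = M F
    """
    n, d = X.shape
    M = np.linalg.solve(alpha * X.T @ X + gamma * np.eye(d), alpha * X.T)
    K = L + (lam + alpha) * np.eye(n)
    F = np.linalg.solve(K - alpha * X @ M, lam * Y)
    Q = M @ F
    return F, Q

def predict(Q, x_new):
    """Step (5): g(x) = Q^T x for a new descriptor, no hypergraph rebuild."""
    return Q.T @ x_new
```

With two well-separated toy clusters and one labelled point per class, a new descriptor dropped near the second cluster is assigned the second class by g(x) alone.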
The present invention builds a hypergraph from the raw image data and uses a semi-supervised learning framework to predict the labels of unlabeled images. Because a hypergraph preserves richer information than an ordinary graph, and because the semi-supervised framework considers both the global attribute information of all the data and the local information between labeled and unlabeled data, the invention achieves good results on indoor scene classification.
The beneficial effects of the present invention are: classifying indoor scenes with image descriptors that contain semantic information and with a semi-supervised learning framework effectively improves the accuracy of indoor scene classification, and the linear regression model trained along the way speeds up label prediction for new data. The invention offers a new technique for robot path selection and indoor surveillance, and makes the use of indoor-scene technology more efficient.
Description of the drawings
Fig. 1 is a flowchart of an embodiment of the present invention.
Fig. 2 compares the classification performance of the present invention with other classification methods. In Fig. 2 the abscissa is the labeled proportion of the training data (%) and the ordinate is the classification accuracy (%); curve a is the hypergraph learning method of the present invention, curve b an ordinary-graph method, curve c the k-nearest-neighbor method, curve d the Laplacian support vector machine, curve e the progressive transductive support vector machine, and curve f an ordinary support vector machine.
Fig. 3 shows label prediction results of the linear regression model used by the present invention. In Fig. 3 the abscissa is the labeled proportion of the training data (%) and the ordinate is the classification accuracy (%); curves a through e show the parameter Q generated from 10%, 20%, 30%, 40%, and 50% of the training data, respectively.
Detailed description
The specific technical scheme and implementation steps of the hypergraph-learning-based indoor scene classification method proposed by the present invention are described with reference to Fig. 1:
Step 1: Extract objects from each image with nearly one hundred object detectors, then concatenate the resulting object descriptors into a super descriptor that serves as the image's feature descriptor;
Step 2: Build a hypergraph over all generated image descriptors with the K-nearest-neighbor method, compute its Laplacian matrix from the generated hypergraph, and construct a semi-supervised learning framework;
Step 3: Build a linear regression model and embed it in the semi-supervised learning framework;
Step 4: Using the semi-supervised learning framework built in step 3 together with the feature descriptors extracted in step 1, label a subset of the image descriptors so that the framework automatically and iteratively predicts the labels of the unlabeled images, completing the classification. The linear regression model of step 3 is initialized during this automatic iteration;
Step 5: Using the linear regression model of step 3 together with the feature descriptors extracted in step 1, newly added data can be classified directly, without building the hypergraph again.
For the specific method of constructing the semi-supervised learning framework mentioned in step 2: first build a hypergraph from the extracted image feature descriptors and compute its incidence matrix H:

H(υ, e) = 1 if vertex υ belongs to hyperedge e, and H(υ, e) = 0 otherwise,

where υ denotes a vertex of the hypergraph and e denotes a hyperedge. The weight w(e) of each hyperedge, the degree d(υ) of each vertex, and the degree δ(e) of each hyperedge can then be computed; placing w(e), d(υ), and δ(e) on the diagonals yields the diagonal matrices W, D_υ, and D_e. From these three diagonal matrices and the incidence matrix, the intermediate result Θ is computed as:

Θ = D_υ^(-1/2) H W D_e^(-1) H^T D_υ^(-1/2)
Subtracting Θ from the identity matrix I gives:

L = I - Θ
The result L is the hypergraph's Laplacian matrix. From it the regularization term of the semi-supervised learning framework can be built:

Ω(f) = f^T L f

where f is the label vector to be predicted for the images and f^T is its transpose. The semi-supervised framework is then constructed as:

F = arg min_F { tr(F^T L F) + λ tr((F - Y)^T (F - Y)) }

where Y is the matrix of image annotations, tr denotes the matrix trace, and λ is a non-negative parameter that controls the balance between model complexity and the empirical loss. Solving this formula yields the predicted labels F for all of the data.
The linear regression model mentioned in step 3 lets newly added data be classified directly, without building the hypergraph again. The model formula is as follows:

g(x) = Q^T x + θ

where Q holds the first-order coefficients of the linear regression model and θ is the constant term. Embedding this linear model in the semi-supervised learning framework gives the new framework:

J(F, Q, θ) = tr(F^T L F) + λ tr((F - Y)^T (F - Y)) + α ||XQ + 1θ^T - F||^2 + γ ||Q||^2

where X holds the feature descriptor of each image, and α and γ are non-negative regularization parameters that control the balance between model complexity and the empirical loss.
Because the formula above is convex, the optimal F can be found by setting the partial derivative with respect to each variable to zero. Writing the framework as J (and omitting θ for brevity), the conditions ∂J/∂F = 0 and ∂J/∂Q = 0 give:

2LF + 2λ(F - Y) + 2α(F - XQ) = 0
2αX^T (XQ - F) + 2γQ = 0

Substituting the Q obtained from the second equation into the first, the result for F is:

F = (K - αXM)^(-1) λY
where the intermediate result K denotes L + (λ + α)I and the intermediate result M denotes (αX^T X + γI)^(-1) αX^T. Substituting this F back into the stationarity equation for Q gives Q as:
Q = (αX^T X + γI)^(-1) αX^T F = MF
The resulting Q is the parameter matrix of the linear regression model. When new data arrives there is no need to build it into the hypergraph; its label information follows directly from the formula g(x) = Q^T x + θ.
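Strung together, steps 2 through 5 above can be exercised end to end on a synthetic two-class set of "descriptors". The data, the parameter values, and the uniform hyperedge weights are all illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Synthetic descriptors: two well-separated clusters standing in for two
# indoor scene classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)),   # class-0 descriptors
               rng.normal(3.0, 0.3, (10, 2))])  # class-1 descriptors
n, k, lam, alpha, gamma = len(X), 3, 1.0, 1.0, 0.1

# Step 2: KNN hypergraph -> incidence H -> Laplacian L = I - Theta
dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
H = np.zeros((n, n))
for e in range(n):
    H[np.argsort(dist[e])[:k + 1], e] = 1.0     # one hyperedge per image
dv, de = H.sum(axis=1), H.sum(axis=0)           # d(v) and delta(e), uniform weights
Theta = np.diag(dv ** -0.5) @ H @ np.diag(1.0 / de) @ H.T @ np.diag(dv ** -0.5)
L = np.eye(n) - Theta

# Steps 3-4: label one image per class and solve the joint framework
Y = np.zeros((n, 2)); Y[0, 0] = 1.0; Y[10, 1] = 1.0
M = np.linalg.solve(alpha * X.T @ X + gamma * np.eye(2), alpha * X.T)
F = np.linalg.solve(L + (lam + alpha) * np.eye(n) - alpha * X @ M, lam * Y)
Q = M @ F

# Step 5: classify a brand-new descriptor without rebuilding the hypergraph
new_label = int(np.argmax(Q.T @ np.array([3.1, 2.9])))
```

The labelled images keep their classes in F, and the new descriptor near the second cluster is assigned the second class directly from Q.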
Claims (2)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201310566625.XA CN103605984B (en) | 2013-11-14 | 2013-11-14 | Indoor scene classification method based on hypergraph learning |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN103605984A CN103605984A (en) | 2014-02-26 |
| CN103605984B true CN103605984B (en) | 2016-08-24 |
Family
ID=50124204
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201310566625.XA Expired - Fee Related CN103605984B (en) | 2013-11-14 | 2013-11-14 | Indoor scene classification method based on hypergraph learning |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN103605984B (en) |
Families Citing this family (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105426923A (en) * | 2015-12-14 | 2016-03-23 | 北京科技大学 | Semi-supervised classification method and system |
| CN107423547A (en) * | 2017-04-19 | 2017-12-01 | 江南大学 | Increment type location algorithm based on the semi-supervised learning machine that transfinites |
| CN109300549B (en) * | 2018-10-09 | 2020-03-17 | 天津科技大学 | Food-disease association prediction method based on disease weighting and food category constraint |
| CN109492691A (en) * | 2018-11-07 | 2019-03-19 | 南京信息工程大学 | A kind of hypergraph convolutional network model and its semisupervised classification method |
| CN111307798B (en) * | 2018-12-11 | 2023-03-17 | 成都智叟智能科技有限公司 | Article checking method adopting multiple acquisition technologies |
| CN110097080B (en) * | 2019-03-29 | 2021-04-13 | 广州思德医疗科技有限公司 | Construction method and device of classification label |
| CN110097112B (en) * | 2019-04-26 | 2021-03-26 | 大连理工大学 | Graph learning model based on reconstruction graph |
| CN110363236B (en) * | 2019-06-29 | 2020-06-19 | 河南大学 | Hyperspectral Image Extreme Learning Machine Clustering Method Based on Space Spectrum Joint Hypergraph Embedding |
| CN111259184B (en) * | 2020-02-27 | 2022-03-08 | 厦门大学 | Image automatic labeling system and method for new retail |
| CN113963322B (en) * | 2021-10-29 | 2023-08-25 | 北京百度网讯科技有限公司 | A detection model training method, device and electronic equipment |
| CN114463602B (en) * | 2022-04-12 | 2022-07-08 | 北京云恒科技研究院有限公司 | Target identification data processing method based on big data |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6598043B1 (en) * | 1999-10-04 | 2003-07-22 | Jarg Corporation | Classification of information sources using graph structures |
| CN103020120A (en) * | 2012-11-16 | 2013-04-03 | 南京理工大学 | Hypergraph-based mixed image summary generating method |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6598043B1 (en) * | 1999-10-04 | 2003-07-22 | Jarg Corporation | Classification of information sources using graph structures |
| CN103020120A (en) * | 2012-11-16 | 2013-04-03 | 南京理工大学 | Hypergraph-based mixed image summary generating method |
Non-Patent Citations (1)
| Title |
|---|
| Jia Zhiyang et al., "Analysis of a semi-supervised hypergraph vertex classification algorithm based on kernel methods," Journal of Yunnan Normal University, Jan. 31, 2013, pp. 46-49 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN103605984A (en) | 2014-02-26 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | C14 | Grant of patent or utility model | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20160824 |