Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. The terminology used in the description of the embodiments of the application herein is for the purpose of describing particular embodiments of the application only and is not intended to be limiting of the application.
As will be appreciated by one of ordinary skill in the art, with the development of technology and the emergence of new scenarios, the technical solutions provided by the embodiments of the application are also applicable to similar technical problems.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate A alone, A and B together, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" or similar expressions mean any combination of the listed items, including any combination of single items or plural items. The terms first, second, and the like in the description, in the claims, and in the above-described figures are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely a manner of distinguishing objects having the same attributes when describing embodiments of the application. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, system, article, or apparatus.
1. Large language model (LLM)
Large language models refer to deep learning models trained using large amounts of text data that can generate natural language text or understand the meaning of language text. The large language model can process various natural language tasks, such as text classification, question-answering, dialogue and the like, and is an important path to artificial intelligence.
Specifically, the large language model is a technology that has emerged in recent years; because the large language model undergoes refined data engineering and training procedures, a large amount of existing natural language processing knowledge has been learned into its parameters. Such knowledge can be used in place of people for many linguistic tasks, such as having a large language model write code or having a large language model produce text summaries.
2. Graph model
A graph model refers to a graph made up of nodes and edges to describe a system.
In embodiments of the application, the graph model may include, in particular, a directed acyclic graph (directed acyclic graph, DAG).
3. Directed acyclic graph (DAG) decomposition
DAG decomposition is a task decomposition method that performs task decomposition based on the concept of directed acyclic graphs. A DAG is a directed graph that contains no cycles, i.e., starting from any vertex, it is not possible to return to that vertex by following a series of directed edges. The DAG is a graph structure well suited to representing and solving problems with precedence or conditional dependencies, such as task scheduling and shortest-path problems in algorithms.
In DAG decomposition, a complex task is decomposed into multiple independent sub-tasks, each of which can be executed independently. This decomposition process first requires determining the in-degree (the number of directed edges entering a vertex) of each vertex in the graph; the vertices with an in-degree of 0 are then assigned to the top layer (the first layer); next, vertices with an in-degree of 0 are searched for in the subgraph obtained by removing the vertices of the previous layer and are assigned to the next layer, and so on until all the vertices are allocated to different layers. Each layer can be executed independently, thereby realizing parallel processing of tasks.
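The layered decomposition described above can be sketched as follows. This is a minimal illustration of in-degree-based DAG layering; the function and vertex names are chosen for illustration only and are not an implementation from the present application:

```python
def dag_layers(vertices, edges):
    """Layer a DAG for task decomposition: the first layer holds the vertices
    with in-degree 0; each subsequent layer holds the vertices whose in-degree
    drops to 0 once the previous layer is removed from the subgraph."""
    indegree = {v: 0 for v in vertices}
    for src, dst in edges:
        indegree[dst] += 1
    layers = []
    remaining = set(vertices)
    while remaining:
        layer = sorted(v for v in remaining if indegree[v] == 0)
        if not layer:
            raise ValueError("graph contains a cycle, not a DAG")
        layers.append(layer)
        remaining.difference_update(layer)
        # Removing this layer decrements the in-degree of its successors.
        for src, dst in edges:
            if src in layer and dst in remaining:
                indegree[dst] -= 1
    return layers
```

Each returned layer contains only sub-tasks whose dependencies have already been resolved, so the sub-tasks within one layer may be executed in parallel.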
4. Knowledge graph
The knowledge graph is a graph-based data structure and a representation of a semantic network, used for displaying entities and the interrelationships among the entities. The knowledge graph comprises nodes and edges, wherein the nodes are used for representing entities such as people, place names, companies, and the like; a node may also correspond to attributes of an entity, and the edges represent the relationships among the entities. In the knowledge graph, data may be organized as multi-tuple data. The multi-tuple data may include triple data, quadruple data, quintuple data, or the like. The triple data includes forms such as node-edge-node and node-attribute name-attribute value.
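As an illustrative sketch of organizing knowledge-graph data as triple data, the entity, edge, and attribute names below are hypothetical examples rather than data from the application:

```python
# Triple data in the two forms named above: node-edge-node for
# relationships, and node-attribute name-attribute value for attributes.
triples = [
    ("bolt", "part_of", "flange_assembly"),   # node - edge - node
    ("bolt", "thread_diameter", "M8"),        # node - attribute name - attribute value
    ("nut", "part_of", "flange_assembly"),
]

def neighbors(graph, node):
    """Return the (edge-or-attribute, value) pairs attached to a node."""
    return [(p, o) for s, p, o in graph if s == node]
```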
The embodiment of the application provides a cloud service-based design code generation method, which can conveniently and efficiently generate design codes for users in an intelligent manner so as to efficiently realize design tasks such as a three-dimensional model, a user interface and the like and realize intelligent industrial design.
The embodiment of the application can be applied to a cloud management platform.
The cloud management platform is used for managing an infrastructure for providing cloud services, the infrastructure comprises a plurality of areas, each area comprises at least one cloud data center, and the cloud services are operated on at least one server of the at least one cloud data center in the plurality of areas.
An exemplary description of a cloud data center is provided below in connection with an architecture diagram shown in fig. 1.
In fig. 1, the cloud management platform performs information interaction with one or more servers (such as server 1 and server 2 in fig. 1) through an internal network of the data center. The server includes a hardware layer and a software layer. The hardware layer includes the hardware configured on the server, including a PCI device, where the PCI device may be, for example, a network card, a graphics processor (graphics processing unit, GPU), an offload card, or another device that may be plugged into a peripheral component interconnect (peripheral component interconnect, PCI) or peripheral component interconnect express (peripheral component interconnect express, PCIe) slot of the server. The software layer includes an operating system installed and running on the server (relative to the virtual machines, this operating system may be referred to as the host operating system), and a virtual machine manager (also referred to as a hypervisor) is disposed in the host operating system. The virtual machine manager implements computing virtualization, network virtualization, and storage virtualization for the virtual machines and is responsible for managing the virtual machines. A virtual machine (virtual machine) refers to a complete computer system that is emulated by software, has complete hardware system functionality, and runs in a completely isolated environment. In the system architecture shown in fig. 1, a plurality of servers are disposed in the infrastructure; the servers may be used to run virtual machines, and the specifications of the virtual machines may be the same or different. Virtual machines may also be referred to as cloud servers (elastic compute service, ECS), elastic instances, or the like; different cloud service providers may use different names.
In an example of the embodiment of the present application, the cloud management platform may be a public cloud platform, and at this time, a person with cloud resource development capability or a cloud service provider such as a software developer may provide cloud services for a user, where the user obtains cloud services through the internet, but does not have cloud computing resources. In other embodiments of the present application, the cloud management platform may be a private cloud platform or a hybrid cloud platform, which the present application is not limited to.
Specifically, in the example shown in fig. 1, the cloud management platform may provide an access interface (such as an interface or an application programming interface (application programming interface, API)). The user of the cloud management platform and the cloud service provider may operate a client to remotely access the interface to register a cloud account number and a password with the cloud management platform, and log into the cloud management platform after the cloud account number and the password are successfully authenticated by the cloud management platform, thereby creating, managing, logging in to, and operating the virtual machines in the cloud data center. It can be seen that an enterprise, organization, or individual may purchase or lease cloud services through the cloud management platform, so that specified tasks may be performed through cloud services provided by the cloud resources of the cloud management platform.
For example, as shown in fig. 2, cloud services provided by the cloud management platform may include design code generation services, and in addition, in some examples, design solution generation services.
For example, in some examples shown in fig. 2, a user may send information of a design intent to a cloud management platform to instruct the cloud management platform to perform a design code generation task, thereby intelligently generating a target design code required for the design task based on a knowledge graph and a large language model and feeding back to the user, so that the user performs a design task on the target design object (e.g., a modeling design task on a three-dimensional model, an assembly task, or a design task on a user interface, etc.) according to the target design code, and obtains the target design object as a design result.
The target design object, which may also be referred to as a target design instance, is the design object generated in a design task, for example, a three-dimensional model that has completed modeling design, a three-dimensional model that has completed assembly design, or a user interface that has completed planar design. It can be seen that the specific types of target design objects can be various and can be determined according to the actual application scenario.
Or in some examples shown in fig. 2, the user may send information of the design intent to the cloud management platform to instruct the cloud management platform to perform the design code generation task to intelligently generate the target design code required for the design task based on the knowledge graph and the large language model. The design code generation service may then pass the target design code to a design solution generation service of the cloud management platform. Then, the design solution generating service may call the target design code to perform a design task on the target design object (for example, a modeling design task on a three-dimensional model, an assembly task, or a design task on a user interface) to obtain the target design object as a design result (for example, a designed three-dimensional model or a user interface), and then output the target design object from the cloud management platform to the user.
It should be noted that the service in fig. 2 is only an example of a service provided by the cloud management platform, and is not limited thereto.
In other examples provided by the application, the design code generation service and the design scheme generation service can be integrally deployed on the cloud management platform, and the functional division manner of each service can be different from that of the service shown in fig. 2, and the deployment manner of each service can be different. Each service may be provided independently, may be embedded in other services, or may be deployed after a plurality of services are combined, which is not limited in this aspect of the application.
Based on the cloud management platform, referring to the system architecture shown in fig. 2, as shown in fig. 3, the method of the embodiment of the present application may include one or more of the following aspects:
These aspects include: knowledge graph construction, enhancement and constraint of the large language model for the design code generation task, design code generation based on the knowledge graph and the large language model, design by calling the generated design code, and design result optimization.
The above aspects are each described below by way of example.
1. Knowledge graph construction
Knowledge graph construction in embodiments of the present application may include one or more of the following aspects:
These aspects include: construction of the knowledge layer (schema) of the knowledge graph, knowledge mining of design materials, joint characterization of design knowledge from multi-modal design materials, and knowledge fusion.
Several aspects that may be involved in knowledge-graph construction are described below by way of example.
1. Construction of a data model (schema) of a knowledge graph
Knowledge graphs may be considered structured semantic knowledge bases and may include a schema layer and an instance layer. The schema may also be referred to as a data schema, and may be considered a data model of a specified domain (e.g., the design domain in the embodiment of the present application), including the concept types that are significant in the specified domain and the attributes of those concept types. The schema of the specified domain is mainly expressed by types (type) and properties (property). The instance layer includes instance data; for example, the instance data corresponding to the entity "name" in the knowledge graph includes Zhang San, Li Si, and the like.
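As a hedged illustration of the split between the schema layer and the instance layer, the concept types and properties below are hypothetical examples, not a schema from the application:

```python
# Schema layer: concept types of the specified domain and their properties.
schema = {
    "Component": {"properties": ["name", "material", "size"]},
    "Designer": {"properties": ["name"]},
}

# Instance layer: data conforming to the schema.
instance_layer = [
    {"type": "Component", "name": "bolt", "material": "steel", "size": "M8"},
]

def conforms(instance, schema):
    """Check that an instance only uses properties declared by its type."""
    declared = set(schema[instance["type"]]["properties"])
    used = set(instance) - {"type"}
    return used <= declared
```

With a schema in hand, information extraction from design materials can then be constrained to entities, relationships, and attributes that fit the declared types.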
In the embodiment of the present application, the specific manner of constructing the schema is not limited herein, and an existing or later developed schema construction manner may be adopted.
For example, the schema may be constructed based on expert experience. For example, in the example shown in fig. 4, for the related application field (i.e., the design field in the embodiment of the present application), the expert predicts the schema from top to bottom according to the design object of the design field, summarizes the schema from bottom to top, and performs information fusion to construct the schema. For example, the preset text of the design field can be collected and arranged, the preset text is usually small in data size so that experts can conveniently carry out summarization and information extraction, and then the characteristics of the entities, the relations and the attributes in the preset text are summarized, so that the relevant data frames of the design field are extracted, and a determined schema is formed.
After the schema is constructed, information extraction (e.g., extracting information such as entities, relationships, and attributes meeting the requirements of the schema) can be performed from the design material in a targeted manner according to the constructed schema, so as to construct a knowledge graph.
2. Knowledge mining of design materials
In the embodiment of the present application, the types of the design materials may be one or more; in other words, the design materials may be single-modal design materials or multi-modal design materials.
By way of example, the design material may include one or more of a first text, a first design object, a first image.
Knowledge mining of design materials of each of the above modalities is described below by way of example, together with possible joint characterization schemes between design materials of multiple modalities.
(1) Mining of explicit knowledge carried by documents
In the embodiment of the application, the design knowledge carried by the document can be regarded as explicit knowledge.
The documents carrying explicit knowledge may include unstructured documents, semi-structured documents, and structured documents.
In one example, the structured document may include a structured form such as a block diagram, a flow diagram, or the like, a graph structure, and/or a table. Because the data structure in the structured document is clear, structured information such as entities, relationships, attributes and the like can be conveniently extracted from the structured document as design knowledge extracted from the structured document.
An exemplary description of knowledge mining of a first image and a first text, which may be included in an unstructured document or a semi-structured document, respectively, is provided below.
In one example, the unstructured document or the semi-structured document may include, but is not limited to, a first image.
The first image may include one or more images, and the data type of the first image may take a variety of forms, without limitation. Illustratively, the first image may be in a bitmap format (e.g., bitmap (BMP), portable network graphics (portable network graphics, PNG), or joint photographic experts group (joint photographic experts group, JPEG) format), a vector graphics (vector graphics) format, or the like.
An exemplary manner of obtaining design knowledge from the first image is described below.
In some embodiments, if the design material includes a first image, the manner of obtaining the design knowledge corresponding to the first image specifically includes:
Design knowledge in the first image is identified by object detection and/or semantic segmentation, the design knowledge in the first image comprising one or more of entities in the first image, relationships between entities in the first image, attributes of the entities in the first image.
In the embodiment of the present application, referring to the example shown in fig. 4, the first image may be identified by means of object detection and/or semantic segmentation in Computer Vision (CV). The entities in the first image may include one or more targets detected in the first image, and the relationship between the entities may be determined according to the relative position between the entities, etc., and further, the attribute of the entities in the first image may include one or more of position, size (e.g., may be described by a bounding box such as a minimum bounding rectangle of the entities), direction, etc.
For example, information such as the category and location of the entity in the first image may be identified by object detection. The specific manner of target detection may be varied and is not limited herein. Illustratively, target detection may be achieved by a machine learning model (e.g., a convolutional neural network (convolutional neural networks, CNN), a support vector machine (support vector machine, SVM), etc.) or by conventional target detection methods (e.g., the scale-invariant feature transform (scale-invariant feature transform, SIFT) combined with the random sample consensus algorithm (random sample consensus, RANSAC)).
In addition, the category to which each pixel in the first image belongs may also be identified by semantic segmentation, in other words, the semantic segmentation may segment the first image into one or more regions and obtain semantic categories to which each of the one or more regions belongs. It can be seen that, according to the semantic category identified by the semantic segmentation and the segmented region, the entity included in the first image may be identified, the relationship between the entities may be determined according to the relative position between the entities, etc., and the attribute of the entity may be determined according to the position of the entity, the information of the region, etc.
It can be seen that, in the embodiment of the present application, the design knowledge such as the entity in the first image, the relationship between the entities and/or the attribute of the entity may be identified through the object detection and/or the semantic segmentation.
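As one possible sketch of turning detection results into design knowledge, relationships between entities may be derived from the relative positions of their bounding boxes. The entity labels and relation names below are illustrative assumptions, not outputs of any particular detector:

```python
def spatial_relations(detections):
    """Derive entity relationships from the relative positions of bounding
    boxes. Each detection is (label, (x_min, y_min, x_max, y_max)), using
    image coordinates where y grows downward."""
    relations = []
    for i, (a, (ax0, ay0, ax1, ay1)) in enumerate(detections):
        for b, (bx0, by0, bx1, by1) in detections[i + 1:]:
            if ax1 <= bx0:               # a ends before b starts on the x axis
                relations.append((a, "left_of", b))
            if ay1 <= by0:               # a ends before b starts on the y axis
                relations.append((a, "above", b))
    return relations
```

Triples produced this way (entity, relation, entity) can be fed directly into the knowledge graph alongside the entity attributes obtained from detection or segmentation.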
In one example, the unstructured document or the semi-structured document may include, but is not limited to, a first text.
An exemplary manner of obtaining design knowledge from the first text is described below.
In the embodiment of the application, the named entity recognition can be performed on the first text so as to recognize the named entity in the first text. Then, information such as relationships between named entities, attributes of named entities and the like can be identified from the first text, and the entities in the first text, the relationships between the entities, the attributes of the entities and the like can be obtained as design knowledge of the first text.
The manner of identifying named entities, relationships between named entities, and attributes of named entities can be various.
For example, in some examples, a large language model to be fine-tuned may be fine-tuned so that the fine-tuned large language model can identify the design information of the first text.
Specifically, in some embodiments, the method further comprises:
Performing fine tuning on the large language model to be fine-tuned according to the preset text and the label of the preset text to obtain the large language model for identifying the design information of the text after fine tuning, wherein the label of the preset text comprises one or more of entities in the preset text, relations among the entities in the preset text and attributes of the entities in the preset text;
And extracting information from the first text through a large language model for identifying the design information of the text, and obtaining the design knowledge corresponding to the first text.
Specifically, as in the example shown in fig. 4, the preset text may include small-sample data for fully supervised learning in the design field, and the small-sample data may correspond to labels describing the entities in the small-sample data.
Therefore, a first fine-tuning of the large language model to be fine-tuned can be performed according to the small-sample data and its labels, and the large language model after the first fine-tuning can identify named entities in texts of the design field.
Weakly supervised data may then be acquired, in which a portion of the data may correspond to labels describing the relationships and/or attributes of the named entities in that portion of the data. Thus, a second fine-tuning can be performed on the large language model after the first fine-tuning according to the weakly supervised data and the labels corresponding to that portion of the data. The large language model after the second fine-tuning can then extract information such as entities in the text, relationships between the entities, and attributes of the entities. The large language model after the second fine-tuning may be regarded as a large language model for design information identification of text, so that when the knowledge graph is constructed, design information identification may be performed on the first text through this model to obtain the design knowledge in the first text.
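Assuming, purely for illustration, that the fine-tuned large language model emits its extraction result as JSON (this output format is an assumption, not something specified by the application), the result might be converted into knowledge-graph triples as follows:

```python
import json

def parse_extraction(model_output):
    """Parse the (assumed) JSON output of a fine-tuned extraction model
    into knowledge-graph triples: relations become node-edge-node triples,
    attributes become node-attribute name-attribute value triples."""
    data = json.loads(model_output)
    triples = []
    for rel in data.get("relations", []):
        triples.append((rel["head"], rel["type"], rel["tail"]))
    for ent in data.get("entities", []):
        for name, value in ent.get("attributes", {}).items():
            triples.append((ent["name"], name, value))
    return triples
```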
It will be appreciated that, in the embodiment of the present application, the fine-tuning operations involved in the large language model for design information identification of text differ from those of the large language model used in the process of generating the design code; therefore, the two large language models may be different. For example, the large language model used in the process of generating the design code may be obtained by further fine-tuning the large language model for design information identification of text, or may be obtained by fine-tuning another large language model to be fine-tuned.
Or in other examples, other existing or later developed recognition means may be employed to identify the design information in the first text.
(2) Mining of implicit knowledge not carried by documents
In the embodiment of the application, design knowledge that is not carried by documents may be regarded as implicit knowledge.
In an actual design scenario, there are often a large number of historical design objects. A historical design object may be regarded as a historical design instance, that is, a design object obtained by performing design tasks in the past. For example, in a three-dimensional design scenario, the historical design object (i.e., the historical design instance) may be a historically designed three-dimensional model, while in a user interface design scenario, the historical design object may be a historically designed user interface.
The large number of historical design objects also typically includes a large amount of design knowledge, which may include personalized design knowledge in a personalized design scenario, and may also include common design knowledge derived from designs based on common design specifications and/or design constraints, and the like. In the traditional knowledge graph construction, knowledge extraction is generally only performed from text, but knowledge extraction is difficult to realize from other forms of materials.
In the embodiment of the application, the implicit knowledge can be mined from the historical design object to enrich the knowledge in the knowledge graph, so that the full mining of the design knowledge in the design field is realized.
An exemplary method of implicit knowledge mining from historical design objects is described below using a first design object of the historical design objects as an example.
Specifically, in some embodiments, the design material includes a first design object, the method comprising:
obtaining, from the first design object, a first geometric feature of a first part of the one or more parts comprised by the first design object, the first geometric feature being used to describe a feature associated with a geometric element in the first part;
and obtaining design knowledge corresponding to the first design object according to the first geometric characteristics.
The first design object, which may also be referred to as a first design instance, may be a historical design object, that is, a design object generated in a historical design task, for example, a three-dimensional model that has completed modeling design, a three-dimensional model that has completed assembly design, or a user interface that has completed planar design.
In the embodiment of the application, the first geometric feature of the first component in the first design object can be obtained.
Wherein the first geometric feature is used to describe a feature associated with a geometric element in the first component. The geometric element may refer to a basic, indivisible object or entity that exists in space. Illustratively, the geometric elements may include primitives in a two-dimensional image, or may include geometric elements in a three-dimensional image. Wherein the primitives in the two-dimensional image may include one or more of points, lines, planes, etc., and the geometric elements in the three-dimensional image may include spheres, basic geometries, etc.
The first geometric feature may include a feature of the geometric element itself in the first component or a feature of the structure formed by the geometric element in the first component, and the first geometric feature may be described by related parameter data of the geometric element or may be described by semantic form.
Illustratively, the first geometric feature includes one or more of the following:
spatial information of geometric elements in the first component, semantic information describing the first component and/or the geometric elements in the first component, structural features of the first component, gradient features of the first component.
Wherein the spatial information of the geometric elements in the first component may include one or more of a position, an orientation, a size, a scaling, and the like, of the geometric elements. The semantic information describing the first component and/or the geometric elements in the first component may describe their semantics, e.g., names, forms, or meanings. The structural features of the first component may comprise the contour of the first component and/or characteristics of the internal structure of the first component, for example, shape information such as the symmetry of the first component and the relationships (e.g., parallel, perpendicular, intersecting) between geometric elements in the first component. The gradient feature of the first component may also be referred to as the sharpness information of the first component, described in terms of the magnitude of the normal gradient of the first component.
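A minimal sketch of computing a few of the features described above for a component given as two-dimensional points: the bounding box and size as spatial information, and mirror symmetry as a structural feature. The feature names and the point-set representation are illustrative assumptions:

```python
def first_geometric_feature(points):
    """Compute simple geometric features of a component given as 2-D points:
    spatial information (bounding box and size) and a mirror-symmetry flag
    about the vertical axis through the bounding-box center."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    bbox = (min(xs), min(ys), max(xs), max(ys))
    size = (bbox[2] - bbox[0], bbox[3] - bbox[1])
    cx = (bbox[0] + bbox[2]) / 2
    # Reflect every point about x = cx; the component is symmetric if the
    # reflected set equals the original set.
    mirrored = {(round(2 * cx - x, 6), y) for x, y in points}
    symmetric = mirrored == set(points)
    return {"bbox": bbox, "size": size, "symmetric_x": symmetric}
```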
It can be seen that the first geometric feature of the first component may describe the first component from the most basic geometric element of the first component to one or more layers of the structure formed by the geometric elements, etc.
There are various ways to identify the first geometric feature of the first component. For example, the first geometric feature may be identified by a machine learning model such as a convolutional neural network (convolutional neural networks, CNN), or identified by an algorithm such as the scale-invariant feature transform (SIFT), speeded-up robust features (speeded up robust features, SURF), features from accelerated segment test (FAST), and the like.
There may be a plurality of situations for the first component.
For example, when the first design object is simpler, for example, includes only one component or several components simply spliced, the one component or the several components simply spliced may be taken as the first component.
Or the first design object may be decomposed to obtain some or all of the components in the first design object as the first components.
Specifically, in some embodiments, obtaining, from a first design object, a first geometric feature of a first component of one or more components included in the first design object, includes:
Decomposing the first design object to obtain one or more levels of components in the first design object;
Obtaining a target level component from one or more levels of components in the first design object as a first component;
A first geometric feature of the first component is acquired.
In the embodiment of the present application, referring to the example shown in fig. 4, the first design object may be decomposed from top to bottom to obtain one or more levels of components of the first design object, where the lowest-level components of the one or more levels may be parts such as bolts, nuts, and the like, so that the first design object is decomposed into components that are relatively general in the design field, have commonality, and are convenient to describe using design knowledge of the design field.
After decomposing the first design object, the topology of each level of the decomposed components can be described by a DAG or the like; the DAG can be optimized by a graph neural network (GNN) or the like, and the one or more levels of components in the first design object are obtained after the optimization.
After obtaining the one or more levels of components, the component at the lowest level of the one or more levels may be taken as the target level component, so that the component at the lowest level (i.e., a leaf node in the DAG) is obtained as the first component. Specifically, referring to the example shown in fig. 4, in the DAG, a component corresponding to a leaf node may be obtained as the first component. It will be appreciated that the components corresponding to the leaf nodes may be assembled from bottom to top to obtain the first design object, so that the components corresponding to the leaf nodes can describe features of the first design object. Moreover, the components corresponding to the leaf nodes are typically relatively general parts in the design field. Therefore, the components corresponding to the leaf nodes may be used as the first components, and the first geometric features of these components may be acquired to obtain the design knowledge in the first design object.
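The leaf-node extraction described above may be sketched as follows. This is a minimal illustration in which the DAG, its node names, and the example assembly are hypothetical, not part of the embodiments; note that in a DAG a shared part (reachable from several parents) still yields a single first component.

```python
def leaf_components(dag, root):
    """Collect the leaf nodes (components with no children) of the
    decomposition DAG; the leaves correspond to the first components."""
    leaves, stack, seen = [], [root], set()
    while stack:
        node = stack.pop()
        if node in seen:        # a DAG node may be reachable via several parents
            continue
        seen.add(node)
        children = dag.get(node, [])
        if not children:
            leaves.append(node)
        else:
            stack.extend(children)
    return leaves

# hypothetical top-down decomposition: node -> list of child nodes
gearbox = {
    "gearbox": ["housing", "gear_train"],
    "gear_train": ["gear", "shaft", "bolt"],
    "housing": ["casing", "bolt"],   # 'bolt' shared by two parents: one leaf
}
first_components = sorted(leaf_components(gearbox, "gearbox"))
```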
After the first geometric feature of the first component is obtained, the entity may be obtained according to the first component, the attribute of the first component may be obtained according to the first geometric feature of the first component, and information such as a relationship between the first components and/or a relationship between geometric elements in the first component or between geometric bodies in the first component may be obtained to construct a knowledge graph.
In addition, in some embodiments, since a single first design object is an independent design object and may include some personalized or unsuitable designs, more commonly used target first geometric features may be screened from a plurality of first design objects and used as a source of design knowledge for the knowledge graph, so as to ensure the reliability of the design knowledge in the knowledge graph.
Specifically, in some embodiments, the number of the first design objects is a plurality, and according to the first geometric feature, obtaining the design knowledge corresponding to the first design objects includes:
Determining target first geometric features from the first geometric features corresponding to the first design objects according to the co-occurrence relation of the first geometric features corresponding to the first design objects;
and obtaining design knowledge corresponding to the first design object according to the target first geometric feature.
Co-occurrence relationship refers to the frequency or probability that two things appear in the same context.
In an embodiment of the present application, the co-occurrence relationship of a certain first geometric feature may be the frequency or probability with which the first geometric feature occurs together with a first design object among the plurality of first design objects. In some examples, the co-occurrence relationship of a certain first geometric feature may specifically be the frequency or probability with which the first geometric feature occurs together with a first design object among a plurality of first design objects of the same type. It can be seen that the strength of the co-occurrence relationship can be represented by the co-occurrence probability. If the co-occurrence relationship indicates that the probability (i.e., the co-occurrence probability) that a certain first geometric feature and a first design object occur simultaneously is relatively high (e.g., higher than a specified probability threshold), the first design object may be considered to have a high probability of containing the first geometric feature, and the first geometric feature may be considered a commonly used geometric feature in the first design object. The first geometric feature may then be determined to be a target first geometric feature among the first geometric features contained in the corresponding first design object, so that the design knowledge corresponding to the target first geometric feature in the first design object can be used as more trusted design knowledge corresponding to the first design object.
In some embodiments, in the knowledge graph, the weights of the design knowledge corresponding to the target first geometric feature are determined according to co-occurrence probabilities corresponding to the target first geometric feature in the co-occurrence relationship.
In the embodiment of the application, the weight can be determined according to the co-occurrence probability corresponding to the target first geometric feature in the co-occurrence relationship. For example, the weight may be equal to the corresponding co-occurrence probability, or may be calculated according to a specified calculation method. The co-occurrence probability corresponding to the target first geometric feature refers to the proportion of first design objects that contain the target first geometric feature among the plurality of first design objects.
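As a minimal, hypothetical sketch of the screening and weighting described above, each first design object can be represented as a set of geometric features, with the co-occurrence probability doubling as the weight (one of the options named above); the feature names and threshold are illustrative only.

```python
def co_occurrence_probability(feature, design_objects):
    """Proportion of first design objects (each given as a set of geometric
    features) that contain the given feature."""
    return sum(1 for obj in design_objects if feature in obj) / len(design_objects)

def target_first_geometric_features(design_objects, probability_threshold=0.5):
    """Features whose co-occurrence probability exceeds the threshold become
    target first geometric features; here the probability itself serves as
    the weight of the corresponding design knowledge."""
    all_features = set().union(*design_objects)
    weights = {f: co_occurrence_probability(f, design_objects) for f in all_features}
    return {f: w for f, w in weights.items() if w > probability_threshold}

objects = [{"thread", "hex_head"}, {"thread", "slot"},
           {"thread", "hex_head"}, {"flange"}]
weights = target_first_geometric_features(objects, probability_threshold=0.5)
```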
It can be seen that, in the knowledge graph, weights may be set for the design knowledge from the first design objects (i.e., the design knowledge corresponding to the target first geometric features). The weight of the design knowledge corresponding to a target first geometric feature may also be considered the confidence of that design knowledge, reflecting how trustworthy it is.
In addition, new first design objects can be continuously generated as design tasks are continuously performed. According to the new first design objects, the design knowledge and the weights thereof corresponding to the target first geometric features in the knowledge graph can be continuously updated, so that the design knowledge in the knowledge graph becomes increasingly complete and reliable, providing a better data basis for a user to execute design tasks using the design knowledge of the knowledge graph.
3. Joint characterization of design knowledge from multi-modal design materials
In the embodiment of the application, one or more modal design materials can be obtained, and based on any embodiment of knowledge mining, after the design knowledge corresponding to each design material is obtained, a knowledge graph can be constructed according to the design knowledge corresponding to each design material.
In some examples, the design material may be a multi-modal design material, and the knowledge graph may be constructed by a joint characterization of design knowledge from the multi-modal design material.
Specifically, referring to the example shown in fig. 5, a knowledge graph construction method in an embodiment of the present application may include steps 501-504.
Step 501, a multi-modal design material is obtained.
Step 502, obtaining design knowledge corresponding to the design material of each mode.
The specific manner of obtaining the design knowledge corresponding to the design material of each mode may refer to the above-mentioned related embodiments of knowledge mining on the design material, which are not described herein.
Step 503, after obtaining the design knowledge corresponding to the design material of each mode, performing joint characterization on the design knowledge corresponding to the design material of each mode to obtain the target joint characterization.
Step 504, constructing a knowledge graph according to the target joint characterization.
The joint characterization may involve at least two modes of the design material, and different cases of the at least two modes of the design material are described below by way of example.
Joint characterization 1: joint characterization between the design knowledge corresponding to the first design object and the design knowledge corresponding to the first text.
Specifically, in some embodiments, the joint characterization is performed on the design knowledge corresponding to the design materials of each mode, so as to obtain a target joint characterization, including:
And carrying out joint characterization on the design knowledge corresponding to the first design object and the design knowledge corresponding to the first text to obtain the target joint characterization.
For example, the first text may include text describing design information of the first design object, such as a description document of the first design object, for describing design information of constraints related to the first design object, semantic information of the first design object, relationships between components in the first design object, and the like. In addition, the first text may also include text describing other design information, which is not limited in this embodiment of the present application.
In the joint characterization, feature encoding (embedding) may be performed on the design knowledge corresponding to the first design object (for example, information such as the first geometric feature of the first design object) to convert it into a vector form, and feature encoding may also be performed on the design knowledge corresponding to the first text (for example, information such as entities extracted from the first text, relationships between the entities, and attributes of the entities) to convert it into a vector form. In this way, the design knowledge corresponding to the first design object and the design knowledge corresponding to the first text are mapped to the same feature space and described in the same form, that is, characterized in the same feature space, thereby realizing the joint characterization of the design knowledge corresponding to the first design object and the design knowledge corresponding to the first text.
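A deliberately simplified sketch of mapping two modalities into one feature space is shown below. The hash-based encoder is only a stand-in for the learned feature encoders an actual implementation would use; the dimensionality and token names are arbitrary assumptions.

```python
import hashlib

DIM = 8  # hypothetical shared feature-space dimensionality

def encode(tokens):
    """Toy feature encoding: hash each token into a fixed-length vector, so
    that knowledge from any modality lands in the same DIM-dimensional space.
    A real system would use learned text/geometry encoders instead."""
    vec = [0.0] * DIM
    for t in tokens:
        h = int(hashlib.md5(t.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    return vec

# design knowledge of the first design object (geometric features) ...
object_vec = encode(["thread", "hex_head", "M8"])
# ... and of the first text (entities / relationships / attributes)
text_vec = encode(["bolt", "matches", "nut", "M8"])
```

Because both vectors live in the same space, downstream fusion (e.g., similarity comparison or graph embedding) can treat knowledge from either modality uniformly.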
Joint characterization 2: joint characterization between the design knowledge corresponding to the first design object and the design knowledge corresponding to the first image.
Specifically, in some embodiments, if the first image includes a first sub-image related to the first design object, then the joint characterization is performed on the design knowledge corresponding to the design material of each mode, so as to obtain a target joint characterization, including:
And carrying out joint characterization on the design knowledge corresponding to the first design object and the design knowledge corresponding to the first sub-image to obtain the target joint characterization.
Illustratively, the first image may include an image related to the first design object, such as a photographed picture corresponding to the first design object, an engineering drawing (e.g., a two-dimensional computer aided design (CAD) drawing) describing the first design object, and so on.
In the joint characterization, the design knowledge corresponding to the first design object may be feature-coded to be converted into a vector form, and the design knowledge corresponding to the first image (for example, information about entities extracted in the first image, relationships between the entities, attributes of the entities, etc.) may be feature-coded to be converted into a vector form, so that the design knowledge corresponding to the first design object and the design knowledge corresponding to the first image are mapped to the same feature space and are described in the same form. In this way, the design knowledge corresponding to the first design object and the design knowledge corresponding to the first image can be characterized and fused in the same feature space, that is, the joint characterization of the design knowledge corresponding to the first design object and the design knowledge corresponding to the first image is realized.
Joint characterization 3: joint characterization between the design knowledge corresponding to the first image and the design knowledge corresponding to the first text.
Specifically, in some embodiments, if the first image includes a first sub-image related to the first design object, then the joint characterization is performed on the design knowledge corresponding to the design material of each mode, so as to obtain a target joint characterization, including:
and carrying out joint characterization on the design knowledge corresponding to the first image and the design knowledge corresponding to the first text to obtain the target joint characterization.
For example, the first text may include a second sub-text related to the first image. For example, one or more first images and some second sub-texts may be included in the same document, and the design knowledge of the one or more first images and of the second sub-texts may be jointly characterized, so that this design knowledge can be fused in combination with the context in the document.
It may be appreciated that in the embodiment of the present application, performing the joint characterization on the design knowledge corresponding to the design material of each mode may include one or more of the three joint characterization modes.
For example, when the multi-modal design material includes the first design object, the first text and the first image, feature encoding may be performed on the design knowledge corresponding to the three-modal design material, so that the design knowledge corresponding to the three-modal design material is converted into a vector form, and joint characterization of the design knowledge corresponding to the three-modal design material is performed, so as to achieve fusion between the design knowledge corresponding to the three-modal design material.
In the embodiment of the application, the specific occasion of the joint characterization can have various conditions.
For example, in some examples, joint characterization may be performed after mining of explicit knowledge (e.g., design knowledge including the first image correspondence and/or design knowledge of the first text unrelated to the first design object, etc.) is implemented to obtain a preliminary sub-graph recording the explicit knowledge, and joint characterization may be performed after mining of implicit knowledge (e.g., design knowledge including the first design object correspondence and/or design knowledge of the first text related to the first design object, etc.) is implemented to obtain a preliminary sub-graph recording the implicit knowledge. Then, a knowledge graph can be constructed according to the preliminary sub-graph recording explicit knowledge and the preliminary sub-graph recording implicit knowledge.
Alternatively, after the design knowledge corresponding to the design materials of all modes is obtained, the design knowledge corresponding to the design materials of all modes may be jointly characterized so as to construct the knowledge graph.
In the embodiment of the application, the design knowledge can be obtained from the multi-mode design material. Wherein the multi-modal design material includes a first design object, which may be considered a historical design object. After the first design object is obtained, the design knowledge implied in the first design object can be mined from the first design object based on geometric features, and the first design object and the design knowledge mined from the design materials of other modes are subjected to joint characterization, for example, target joint characterization in a vector form is adopted, so that fusion of the design knowledge of the multi-mode design materials is realized, and knowledge graph construction is realized. In addition, the knowledge graph can comprise design knowledge mined from the multi-mode design material, so that the constructed knowledge graph contains rich design knowledge, and a better data basis is provided for related application of the knowledge graph.
4. Knowledge fusion
In the embodiment of the present application, referring to the example shown in fig. 4, after the joint characterization, fusion of design knowledge corresponding to the multi-mode design material may be implemented, for example, semantic fusion and/or structural fusion may be performed, so as to obtain a fusion result.
Illustratively, the semantic fusion may include fusion of identical descriptions among the descriptions of entities, relationships, attributes, and the like, and unification of synonymous expressions, so as to achieve semantic alignment and fusion of the design knowledge corresponding to the multi-modal design material. The structural fusion may include alignment and fusion of information such as the relationships among entities and the attributes of entities, so as to achieve structural alignment and fusion of the design knowledge corresponding to the multi-modal design material.
In this way, the knowledge graph can be constructed in a specified structured form according to the constructed schema and the fusion result, and can be stored in the graph database.
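The unification of synonymous expressions in semantic fusion can be illustrated with a toy alias table; the entity names, relation, and aliases below are hypothetical.

```python
# hypothetical paraphrase table used for unification of synonymous expressions
ALIASES = {"screw": "bolt", "hexagon nut": "nut"}

def semantic_fuse(triples):
    """Semantic fusion sketch: map paraphrased entity names to a canonical
    form so that identical descriptions collapse into a single triple."""
    canon = lambda name: ALIASES.get(name, name)
    return {(canon(h), r, canon(t)) for h, r, t in triples}

fused = semantic_fuse([
    ("screw", "matches", "hexagon nut"),
    ("bolt", "matches", "nut"),          # same knowledge, different wording
])
```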
In addition, in the embodiment of the application, the specific form of the knowledge graph can have various conditions.
For example, in some examples, the knowledge graph includes a first sub-graph and at least one second sub-graph. The first sub-graph is used to describe common design knowledge in the design knowledge corresponding to the multi-modal design material, where the common design knowledge includes one or more of: design knowledge from at least one first text; design knowledge from at least one first image; and design knowledge that is from a plurality of first design objects and whose co-occurrence probability in the plurality of first design objects satisfies a specified condition. Any one of the second sub-graphs is derived from one or more of: design knowledge corresponding to one or more first design objects; design knowledge corresponding to a first text associated with the one or more first design objects; and design knowledge corresponding to a first image associated with the one or more first design objects.
It can be seen that in this example, the knowledge-graph may take the form of 1+n. Where "1" indicates that the knowledge-graph includes one sub-graph (i.e., a first sub-graph) describing common design knowledge, and "n" indicates that the knowledge-graph includes n sub-graphs (i.e., n second sub-graphs) of design knowledge extracted from independent historical design objects, where n may be a positive integer, that is, the number of second sub-graphs may be at least one.
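The "1+n" layout can be sketched as a simple container class; the storage format (weighted triples), keys, and field names below are illustrative assumptions, not a prescribed schema.

```python
class KnowledgeGraph:
    """'1+n' layout: one first sub-graph of common design knowledge plus n
    second sub-graphs, each holding knowledge mined from independent
    historical design objects. Triples: (head, relation, tail, weight)."""
    def __init__(self):
        self.first_subgraph = []      # the "1": common design knowledge
        self.second_subgraphs = {}    # the "n": keyed e.g. by object type

    def add_common(self, triple, weight=1.0):
        self.first_subgraph.append((*triple, weight))

    def add_object_knowledge(self, key, triple, weight):
        self.second_subgraphs.setdefault(key, []).append((*triple, weight))

kg = KnowledgeGraph()
kg.add_common(("bolt_M8", "matches", "nut_M8"))
kg.add_object_knowledge("gearbox", ("gear", "meshes_with", "gear"), weight=0.75)
```

Keeping the second sub-graphs keyed separately makes it straightforward to update or discard knowledge from one batch of historical design objects without touching the common sub-graph.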
The sources of the common design knowledge in the first sub-graph may include one or more of: design knowledge from at least one first text; design knowledge from at least one first image; and design knowledge that is from a plurality of first design objects and whose co-occurrence probability in the plurality of first design objects satisfies a specified condition.
The common design knowledge may be considered relatively general design knowledge in the design field, such as the matching relationship between bolt standards and nut standards. Thus, the at least one first text used to obtain the common design knowledge may include text describing relatively canonical design knowledge in the design field, and in many scenarios does not include text describing a personalized design. For example, in some scenarios, the at least one first text may not include text related to the first design object, such as a description document of the first design object. Likewise, the at least one first image used to obtain the common design knowledge may be an image describing relatively general and canonical design knowledge; for example, it may include a photograph of a standard part or a two-dimensional CAD drawing of a standard part. Furthermore, the at least one first image used to obtain the common design knowledge may be an image from a document that also contains text describing relatively canonical design knowledge in the design field, so that, in combination with the context information of the document, accurate design knowledge can be extracted from the text and the images in the document as the common design knowledge in the first sub-graph.
Further, in some examples, the common design knowledge may include design knowledge that is from the plurality of first design objects and whose co-occurrence probability in the plurality of first design objects satisfies a specified condition.
In the embodiment of the application, the specified condition may be that the co-occurrence probability is higher than a relatively high probability threshold, so as to ensure that the design knowledge is very general design knowledge among the plurality of first design objects. Further, in some examples, the number of the plurality of first design objects may also be required to reach a specified number threshold, to ensure that the design knowledge from the plurality of design objects is extracted from a sufficient number of first design objects and is therefore sufficiently general. In this case, very general design knowledge from the plurality of first design objects may be used as common design knowledge in the first sub-graph.
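The two thresholds described above can be combined into a single screening rule; the threshold values below are arbitrary placeholders, not values from the embodiments.

```python
def is_common_design_knowledge(co_occurrence_probability, num_design_objects,
                               probability_threshold=0.9, count_threshold=100):
    """Hypothetical screening rule: design knowledge mined from first design
    objects is promoted into the common (first) sub-graph only when its
    co-occurrence probability is high enough AND it was observed across a
    sufficient number of first design objects."""
    return (co_occurrence_probability >= probability_threshold
            and num_design_objects >= count_threshold)

# seen in 150 design objects with 95% co-occurrence: general enough
promoted = is_common_design_knowledge(0.95, 150)
# seen in only 10 design objects: sample too small, stays in a second sub-graph
rejected = is_common_design_knowledge(0.95, 10)
```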
The second sub-graph, in turn, may record design knowledge obtained from one or more independent historical first design objects.
In a conventional knowledge graph, it is difficult to extract effective design knowledge from first design objects. In the embodiment of the present application, design knowledge with higher reliability may be extracted, based on geometric features, co-occurrence relationships, and the like, from one or more first design objects and the design materials associated with them (for example, a first text associated with the one or more first design objects, a first image associated with the one or more first design objects, etc.), and may be stored in one or more second sub-graphs so as to be distinguished from the common design knowledge in the first sub-graph. In this way, as design tasks are continuously executed, the design knowledge in the second sub-graphs can be conveniently updated according to newly generated first design objects, and the user can select design knowledge meeting the requirements from the first sub-graph and the second sub-graphs according to actual needs, so as to execute the design task.
When there are a plurality of second sub-graphs, the different second sub-graphs may be divided in a plurality of ways, which are not limited herein. For example, the different second sub-graphs may respectively record the design knowledge of different types of first design objects.
It can be seen that through the first sub-graph and the second sub-graph, the design knowledge of different sources and different confidence levels can be stored, applied, updated and managed respectively.
In other examples, the knowledge graph may have other forms. For example, the knowledge graph may include sub-graphs corresponding to different design types, or the design knowledge corresponding to each design material may be fused into the same knowledge graph. Based on the mode of constructing the knowledge graph, the knowledge graph can be constructed and obtained for the subsequent design code generation task. In addition, as shown in the example of fig. 4, the knowledge graph may also perform updating iteration according to expert knowledge or a result of performing a subsequent design task performed based on the generated design code, for example, updating weights of corresponding design knowledge in the knowledge graph according to a specific situation of the target design object obtained by the design task.
In traditional knowledge graph construction, knowledge extraction is generally performed only from text, and it is difficult to achieve knowledge extraction from materials in other forms.
It may be appreciated that in the embodiment of the present application, the knowledge graph may be constructed by a cloud management platform and deployed in an infrastructure managed by the cloud management platform. Or the knowledge graph can be constructed by other devices, and then transmitted and deployed to the infrastructure managed by the cloud management platform.
2. Enhancement and constraint of the large language model oriented to the design code generation task
In some embodiments, large language model enhancements and constraints may be performed in advance in order to perform subsequent design code generation tasks, such that the large language model after the enhancements and constraints can perform the design code generation tasks more accurately.
Based on different scene requirements of the subsequent design code generation task, the inference mode of the enhanced and constrained large language model in the design code generation task may vary; correspondingly, the specific forms of the training data and the tags used to enhance and constrain the large language model may also vary. These forms are described below by way of example.
Specific form 1 of the training data and its tag: the training data comprises a preset design intent and preset design knowledge, and the tag of the training data comprises a preset design code.
In this example, the specific generation manner of the preset design intent of any training data may refer to the generation manner of the design intent in the related embodiment of the design code generation portion based on the knowledge graph and the large language model, or the preset design intent may be from a third party, or the preset design intent may be manually configured in advance by the user.
The preset design knowledge in the training data may be design knowledge related to a preset design intention retrieved from a knowledge graph, or the preset design knowledge may be manually configured in advance by a user.
The tag of the training data may include a preset design code, and the preset design code may be an execution script of the design software. Unlike the subsequent examples, however, it does not include information of preset design code modules; that is, the preset design code does not perform the design by calling at least one design code module, but is an execution script that includes specific execution instructions.
The specific type of the preset design code is not limited herein and may be determined based on the needs of the design software that performs the design task. Illustratively, the preset design code may be written in a domain specific language (DSL); in particular, in some examples, the preset design code may be a Python script, since Python has become a common secondary development script language for much design software.
Specific form 2 of the training data and its tag: the training data comprises a preset design intent and preset design knowledge, and the tag of the training data comprises preset triplet information and a module identifier of at least one preset design code module, where the preset triplet information comprises an operation object, an operation instruction, and an operation parameter corresponding to the at least one preset design code module.
In this example, for each design code module, the module identification and the specific content may be written in advance, and the corresponding call interface may be created in advance.
For example, a developer may deconstruct a historical design task that implements the design of a historical design object into a plurality of design subtasks, and then write design code modules for one or more of the design subtasks respectively, so that any one of the design code modules can execute the corresponding design subtask to implement a specified design function. It is understood that a design code module may also be considered a function.
Furthermore, in some examples, the design code modules may be atomized modules; that is, each design code module is a minimal code module that cannot be further subdivided. In this way, each design code module can be flexibly applied and flexibly combined when design code is subsequently generated.
After writing the design code module, a call interface corresponding to the design code module may be created to facilitate calling the design code module through the call interface. In addition, a module identification (e.g., the name of the design code module) of each design code module may be determined to uniquely identify the corresponding design code module. Further, the call information of the design code module can be normalized into the form of triplet information; specifically, the triplet information may include an operation object, an operation instruction, and an operation parameter. In this way, the subject-predicate-object structure of the triplet information in the tag can guide the large language model in learning the mapping from natural language to code generation, so that the enhanced and constrained large language model has the capability of identifying, according to the training data, the module identification of the at least one preset design code module required by the corresponding design task, and can extract the preset triplet information according to the training data, thereby conveniently realizing interface calls to the at least one preset design code module.
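The normalization of a module call into triplet information can be sketched as follows; the module `make_hole`, its registry, and the example triplet are hypothetical stand-ins for a design code module and its call interface.

```python
# hypothetical design code module ('function') and its call interface
def make_hole(operation_object, operation_parameter):
    return f"hole(d={operation_parameter}) on {operation_object}"

# module identification -> call interface
MODULE_REGISTRY = {"make_hole": make_hole}

def call_by_triplet(triplet):
    """Dispatch one piece of triplet information (operation object, operation
    instruction, operation parameter): the operation instruction names the
    design code module to invoke through its call interface."""
    operation_object, operation_instruction, operation_parameter = triplet
    module = MODULE_REGISTRY[operation_instruction]
    return module(operation_object, operation_parameter)

result = call_by_triplet(("flange_face", "make_hole", 8))
```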
In this example, the tag of the training data may include preset triplet information and a module identifier of at least one preset design code module, but does not include an execution statement, so it cannot be directly used as an execution script.
Specific form 3 of the training data and its tag: the training data comprises a preset design intent and preset design knowledge, and the tag of the training data comprises a preset design code capable of invoking at least one design code module by means of the preset triplet information and the module identification of the at least one preset design code module.
In the embodiment of the application, the label of the training data comprises the preset design code, and the preset design code not only comprises the preset triplet information and the module identification of at least one preset design code module, but also comprises the execution statement, so that the label can be used as an execution script. In the preset design code, at least one preset design code module can be called according to preset triplet information and the module identification of at least one preset design code module.
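In contrast to form 2, the preset design code of form 3 also contains execution statements and can therefore run as a script. A minimal sketch of such an executable script, with a hypothetical module registry and triplet sequence, might look like:

```python
# hypothetical design code module with a unique module identification
def make_hole(operation_object, operation_parameter):
    return f"hole(d={operation_parameter}) on {operation_object}"

MODULE_REGISTRY = {"make_hole": make_hole}

# the preset design code of form 3: triplet information plus execution
# statements (the loop below) that actually invoke the identified modules
GENERATED_DESIGN_CODE = [
    ("base_plate", "make_hole", 8),
    ("base_plate", "make_hole", 10),
]

def run_design_code(code, registry):
    log = []
    for operation_object, operation_instruction, operation_parameter in code:
        log.append(registry[operation_instruction](operation_object,
                                                   operation_parameter))
    return log

log = run_design_code(GENERATED_DESIGN_CODE, MODULE_REGISTRY)
```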
The inference modes of the large language model obtained by the enhancement and constraint of the different forms of the training data and the labels thereof in the design code generation task may be different, and the specific inference modes can refer to the description of the related embodiments of the subsequent design code generation task, which is not described herein.
In the embodiment of the present application, after a plurality of groups of training data and their tags are obtained, the large language model to be enhanced can be enhanced and constrained by back propagation or the like according to the plurality of groups of training data and their tags, so that the large language model obtained after the enhancement and constraint can generate corresponding design code according to the input design intent.
Wherein in one example, a multi-agent (multi-agent) framework may be employed to enhance and constrain the large language model to be enhanced.
For example, fig. 6 shows an exemplary schematic diagram of the multi-agent framework.
In a multi-agent framework, one or more of intent understanding, problem deconstructing, large language models, knowledge maps, feedback models, and the like may be included. The process of enhancing and constraining the large language model to be enhanced comprises at least one iteration process, and the description is given below taking an ith iteration process in the at least one iteration process as an example.
The ith iteration process in the at least one iteration process comprises: reasoning over training data through the large language model to be enhanced in the ith iteration process to obtain output data; evaluating, through the knowledge graph, whether the output data meets specifications and/or constraints to obtain an evaluation result; and updating the large language model to be enhanced in the ith iteration process according to the difference between the output data and the label and the evaluation result.
In the enhancing and constraining process, intent understanding may be performed on the information of the preset design intent input by the user, and the preset design knowledge corresponding to the preset design intent may be queried from the knowledge graph, so as to obtain the preset design intent and the preset design knowledge. In some examples, the user's design intent may additionally be deconstructed to obtain one or more design subtasks, each of which may be implemented by a design code module. In this way, the preset design intent and the preset design knowledge corresponding to each design subtask can be obtained as training data. Then, the training data including the preset design intent and the preset design knowledge corresponding to the design subtasks obtained by problem deconstruction can be used as input data and input into the large language model to be enhanced, so as to obtain the output data of the large language model to be enhanced.
The large language model may query the knowledge graph during reasoning or after obtaining the output data. For example, the large language model may query the knowledge-graph for specifications and/or constraints specifying a design during reasoning, and further, after obtaining the output data, the large language model may query the knowledge-graph for whether the output data meets the specifications and/or constraints to obtain an evaluation result that evaluates whether the output data meets the specifications and/or constraints. The specification may include design criteria stored in the knowledge graph, and may be considered as instructive conditions, which are generally objective and uniform. While constraints refer to restrictive conditions, which may be described in terms of functions or inequalities, etc.
In this way, according to the difference between the output data and the label and the evaluation result, the large language model to be enhanced in the ith iteration process is counter-propagated so as to update the weight of the large language model to be enhanced in the ith iteration process.
When the number of iterations reaches a specified count, or the loss value of the output data of the large language model to be enhanced converges to an expected state, the enhancement and constraint of the large language model is complete, and the resulting large language model can be used for subsequent design code generation tasks.
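One iteration of the enhance-and-constrain loop can be sketched as below. This is a toy illustration under strong simplifying assumptions: a scalar weight stands in for the large language model, the "knowledge graph evaluation" is a plain constraint-violation count, and the constraint term enters the loss as a penalty rather than through a real gradient.

```python
# Illustrative sketch of one iteration: the update signal combines (a) the
# difference between output data and the label and (b) a penalty from the
# knowledge-graph evaluation of specifications/constraints. Model and
# evaluator are stand-ins, not a real LLM or knowledge graph.
def knowledge_graph_evaluate(output, constraints):
    """Return 0 if the output satisfies every constraint, else the violation count."""
    return sum(0 if check(output) else 1 for check in constraints)

def train_step(weight, example, label, constraints, lr=0.1, penalty=0.5):
    # Toy "model": scalar weight times input stands in for LLM inference.
    output = weight * example
    label_loss = (output - label) ** 2
    violations = knowledge_graph_evaluate(output, constraints)
    loss = label_loss + penalty * violations
    # Toy gradient of the label term; the constraint term is a penalty signal.
    grad = 2 * (output - label) * example
    return weight - lr * grad, loss

# Example constraint: the output must be non-negative.
constraints = [lambda out: out >= 0]
w, loss = train_step(weight=0.5, example=2.0, label=3.0, constraints=constraints)
```

Iterating `train_step` until the loss converges to an expected state, or until a specified iteration count is reached, mirrors the stopping condition described above.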
In some examples, after the large language model is applied to the actual application scenario to perform the task of generating the design code, so as to obtain the target design object and output the target design object, the large language model may be further enhanced and constrained by the feedback model in the example shown in fig. 6 according to feedback information of the expert on the target design object, and so on.
Specifically, the expert may evaluate the accuracy of the target design object to obtain feedback information of the target design object. Then, the feedback model can inquire whether the feedback information accords with the specification and/or the constraint from the knowledge graph, and obtain target feedback information according to the corresponding inquiry result. Of course, in some examples, feedback information input by the expert on the target design object may be directly used as the target feedback information. After the target feedback information is obtained, the feedback model can perform reinforcement learning on the large language model through the target feedback information so as to further enhance and restrict the large language model in the application process and continuously improve the performance of the large language model.
It may be understood that, in the embodiment of the present application, the enhancement and constraint of the large language model may be implemented by the infrastructure managed by the cloud management platform executing the embodiment of the present application, or may be implemented in other devices, and then deployed to at least one server of the infrastructure.
3. Design code generation based on knowledge graph and large language model
After the knowledge graph is constructed and the large language model after the fine tuning is obtained, a design code generation task may be performed by the knowledge graph and the large language model, for example, the design code generation task may be performed by a design code generation service of the cloud management platform in the example shown in fig. 2, to implement a design code generation method.
Specifically, as shown in fig. 7, the cloud service-based design code generation method may include steps 701-703.
Step 701, providing a configuration interface.
The configuration interface is used for acquiring information of design intent input by a user, and the design intent is used for indicating generation of a target design object.
In the embodiment of the application, the cloud management platform can provide a configuration interface for the client of the user, so that the user can input information of the design intent through the configuration interface displayed by the client.
By way of example, the information of the design intent may include one or more of a second text, a second design object, a second image. Accordingly, the design intent may include one or more of an intent based on the second text, an intent based on the second design object, an intent based on the second image.
In this way, after one or more of the second text, the second design object, and the second image are acquired through the configuration interface, the design intent may be acquired from one or more of the second text, the second design object, and the second image. Since the design intent is derived based on information entered by the user, the design intent may be considered to describe the user's explicit design intent for the target design object.
However, since the specific content of the information of the design intent may be various, the manner of acquiring the design intent may be various.
For example, in some examples, the design intent includes an intent derived based on a second design object, the method further comprising:
Obtaining a second geometric feature of a second part associated with the target design object in the one or more parts included in the second design object, wherein the second geometric feature is used for describing a feature related to a geometric element in the second part;
according to the second geometric feature, an intent based on the second design object is obtained.
There may be a plurality of cases for the second design object input by the user. For example, the second design object may be a design object indicated by a user as a reference, or the second design object may be a design object for which a design task is directed, for example, a design task to be performed is an assembly design task, and the second design object may be an object to be assembled. In addition, the number of the second design objects may be one or more, which is not limited herein.
Wherein, in some embodiments, obtaining the second geometric feature of the second part associated with the target design object of the one or more parts included in the second design object comprises:
decomposing the second design object to obtain one or more layers of components in the second design object;
Obtaining a target level component from one or more levels of components in the second design object as a second component;
a second geometric feature of the second component is acquired.
In some embodiments, the second geometric feature comprises one or more of the following:
spatial information of geometric elements in the second component, semantic information describing the second component and/or geometric elements in the second component, structural features of the second component, gradient features of the second component.
In the embodiment of the present application, the specific content of the second geometric feature and the specific manner of acquiring the second geometric feature may refer to the specific content of the first geometric feature and the specific manner of acquiring the first geometric feature in any of the above embodiments, which are not described herein. In addition, in the embodiment of the present application, the process of decomposing the second design object to obtain the second geometric feature of the second component may also be understood as problem deconstructing.
After obtaining the information of the second geometric feature of the second design object, the second design object input by the user may be described by the information of the second geometric feature, thereby obtaining the intent based on the second design object.
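The decomposition of the second design object into layered components and the collection of second geometric features can be sketched as below. The component hierarchy, field names, and the car example are all invented for illustration; a real implementation would extract spatial and semantic information from the design software's model.

```python
# Hypothetical sketch: decompose a design object into one or more layers of
# components, select the target-level components as "second components", and
# collect their geometric features. All structures are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    level: int
    spatial_info: dict = field(default_factory=dict)   # e.g., bounding box
    semantics: str = ""                                # semantic description
    children: list = field(default_factory=list)

def flatten(component):
    """Walk the component hierarchy, yielding every component at every level."""
    yield component
    for child in component.children:
        yield from flatten(child)

def second_geometric_features(design_object, target_level):
    """Return geometric features of the components at the target level."""
    return [
        {"name": c.name, "spatial": c.spatial_info, "semantics": c.semantics}
        for c in flatten(design_object) if c.level == target_level
    ]

car = Component("car", 0, children=[
    Component("chassis", 1, {"bbox": (4.5, 1.8, 0.3)}, "load-bearing frame"),
    Component("body", 1, {"bbox": (4.5, 1.8, 1.2)}, "outer shell"),
])
features = second_geometric_features(car, target_level=1)
```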
In some embodiments, the design material includes a second image, the method further comprising:
Identifying design information in the second image by object detection and/or semantic segmentation, the design information in the second image including one or more of entities in the second image, relationships between entities in the second image, attributes of the entities in the second image;
the intent based on the second image is obtained from the design information in the second image.
In the embodiment of the application, any second image can be an independent image, or can be an image related to the second design object, for example, a photo, a two-dimensional CAD graph and the like of the second design object, or can be an image related to the second text, for example, the second text is located in the same document. The method for obtaining the design information of the second image may refer to the method for obtaining the design knowledge of the first image in the above embodiment, which is not described herein.
In some embodiments, the design material includes a second text.
The second text may be description text input by the user for describing the design requirement, for example, information on a target design object to be designed, or the like may be described.
In some examples, the intent understanding of the second text may also be performed by a machine learning model or other algorithm, or the like, to mine the design intent contained in the second text.
Specifically, in some embodiments, the design material includes a second text, the method further comprising:
and extracting the information from the second text by using a large language model for information extraction or other information extraction algorithms to obtain the design information corresponding to the second text.
For example, problem deconstruction may be implemented by decomposing a complex design task into a plurality of design subtasks based on a tree of thoughts (ToT) and/or a chain of thought (CoT), through a large language model for information extraction or another information extraction algorithm, to obtain the intent based on the second text; intent understanding may also be performed using the context of the second text, etc., to obtain the intent based on the second text.
In yet other examples, the second text may be directly used as an intent based on the second text.
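Problem deconstruction of the second text can be sketched, in very simplified form, as splitting a compound design request into ordered design subtasks. The rule-based splitting below is a crude stand-in for what a large language model with ToT/CoT prompting would actually infer; the example sentence is invented.

```python
# Illustrative sketch of problem deconstruction: a compound design request is
# split into ordered design subtasks. The splitting rules are toy stand-ins
# for large-language-model inference.
def deconstruct(design_text):
    """Split a compound design request into ordered design subtasks."""
    subtasks = []
    for clause in design_text.replace(" and ", ";").split(";"):
        clause = clause.strip()
        if clause:
            subtasks.append({"subtask": clause})
    return subtasks

tasks = deconstruct("design the engine; design the chassis and design the outline")
```

Each resulting subtask could then be mapped to one design code module, as described above for the training stage.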
Furthermore, in some embodiments, when the information entered by the user includes data of at least two modalities (e.g., including at least two of the second text, the second design object, and the second image), a joint characterization may be performed in order to achieve fusion of the intents corresponding to the two modalities.
For example, in some embodiments, the design intent includes an intent derived based on the second design object and an intent derived based on the second text, and obtaining the design intent may include:
Performing joint characterization on the second geometric features of the second part in the second design object and design information in the second text to obtain a first joint characterization;
According to the first joint characterization, a first joint characterization intent is obtained, the first joint characterization intent comprising the intent derived based on the second design object and the intent derived based on the second text.
In some embodiments, the design intent includes an intent derived based on the second design object and an intent derived based on the second image, and obtaining the design intent may include:
Performing joint characterization on the second geometric feature of the second part in the second design object and design information in the second image to obtain a second joint characterization;
According to the second joint characterization, a second joint characterization intent is obtained, the second joint characterization intent comprising the intent derived based on the second design object and the intent derived based on the second image.
In some embodiments, the design intent includes an intent based on the second text and an intent based on the second image, and obtaining the design intent may include:
carrying out joint characterization on the design information in the second text and the design information in the second image to obtain a third joint characterization;
according to the third joint characterization, a third joint characterization intent is obtained, the third joint characterization intent comprising the intent derived based on the second text and the intent derived based on the second image.
In the embodiment of the application, the first joint characterization, the second joint characterization, and the third joint characterization may be in vector form, so as to uniformly describe the intent information of data from different modalities in the same feature space, thereby facilitating subsequent uniform processing.
The specific generation manner of the first joint characterization, the second joint characterization, and the third joint characterization may refer to the related description in the related embodiment of the joint characterization with respect to the design knowledge corresponding to the design material of each mode, which is not described herein.
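The idea of joint characterization, i.e., mapping per-modality features into one shared vector space and fusing them, can be sketched as below. The hash-based "encoders" and averaging fusion are toy stand-ins for real learned encoders and fusion layers; the inputs are invented examples.

```python
# Minimal sketch of joint characterization: per-modality features are mapped
# into a shared vector space and fused, so intents from text, design object,
# and image can be processed uniformly. The "encoder" is a toy hash-based
# stand-in for a real learned encoder.
def toy_encode(data, dim=4):
    """Map arbitrary data to a fixed-length vector (stand-in for an encoder)."""
    h = abs(hash(str(data)))
    return [((h >> (8 * i)) % 256) / 255.0 for i in range(dim)]

def joint_characterization(*modal_features):
    """Fuse per-modality vectors by element-wise averaging in the shared space."""
    vectors = [toy_encode(f) for f in modal_features]
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

# E.g., fusing an intent from the second text with one from the second design
# object yields a single vector describing both in the same feature space.
rep = joint_characterization("a four-door sedan", {"component": "chassis"})
```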
In the embodiment of the application, based on different situations of the information input by the user, the specific content of the design intent can have various situations.
In addition, the specific form of the design intent may be a variety of situations. For example, when the design intent includes only intent based on the second text, the design intent may be in text form. While the design intent may be in the form of a vector when the design intent includes intent derived from data from multiple modalities.
Also, in some examples, the design intent may be in the form of a triplet including an operation object, an operation instruction, and an operation parameter. The information of the operation object, the operation instruction, and the operation parameter in the design intent may be described in text or the like, or may be described in vector form. It can be seen that in this example, the design intent includes triplet information based on one or more of the second text, the second design object, or the second image.
Step 702, retrieving target design knowledge corresponding to the design intent from the knowledge graph.
The knowledge graph includes one or more of design knowledge from the first text, design knowledge from the first design object, and design knowledge from the first image, and the target design knowledge is used to describe an implicit design intent for the target design object.
After the design intent is obtained, target design knowledge corresponding to the design intent may be retrieved from the knowledge graph.
The knowledge graph can be constructed by adopting any embodiment of the knowledge graph construction, so that the knowledge graph can comprise one or more of design knowledge from a first text, design knowledge from a first design object and design knowledge from a first image. In this way, more comprehensive target design knowledge that matches one or more of the intent based on the second text, the intent based on the second design object, and the intent based on the second image may be retrieved from the knowledge graph.
For example, in some examples, in the conventional knowledge graph construction scheme, it is difficult to extract and store design knowledge from a historical design object (i.e., a first design object), and in the actual application process, it is also impossible to extract design intent from a second design object.
In the knowledge graph of any embodiment of the present application, the design knowledge may be extracted from the historical design object and stored. In this way, the user may input the second design object through the configuration interface, and the cloud management platform may extract an intent based on the second design object from the second design object input by the user and query the knowledge graph for related target design knowledge.
The target design knowledge is used to describe the implicit design intent of the target design object; in other words, according to the information of the explicit design intent input by the user, a more comprehensive design intent that cannot be directly extracted from the user's input information can be further mined through the knowledge graph, so that the large language model can be more comprehensively guided to generate reasonable target design code.
For example, in one example, if the information of the design intent input by the user indicates that the user designs the automobile, the design knowledge about the design specification, constraint, and the like related to the automobile design may be mined from the knowledge graph as target design knowledge, and the target design knowledge is not included in the input information of the user, but may be retrieved from the knowledge graph, so that the large language model may be guided to generate a target design code capable of more reasonably performing the automobile design in a subsequent step.
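The retrieval of step 702 can be sketched with a tiny triple-store knowledge graph, queried for the specifications and constraints linked to the entities in the user's explicit design intent. The graph content and predicate names below are invented for illustration; a real knowledge graph would hold design knowledge extracted from the first text, first design object, and first image.

```python
# Hypothetical sketch: a knowledge graph stored as (subject, predicate,
# object) triples, queried for target design knowledge (specifications and
# constraints) matching the entities in the design intent.
KNOWLEDGE_GRAPH = [
    ("automobile", "has_specification", "wheelbase between 2.4m and 3.1m"),
    ("automobile", "has_constraint", "engine designed before outer shell"),
    ("aircraft", "has_specification", "wing loading within certified range"),
]

def retrieve_target_design_knowledge(intent_entities):
    """Return specifications/constraints for entities mentioned in the intent."""
    return [
        (subj, pred, obj) for (subj, pred, obj) in KNOWLEDGE_GRAPH
        if subj in intent_entities and pred in ("has_specification", "has_constraint")
    ]

# Explicit intent "design an automobile" surfaces implicit design knowledge
# that the user never typed in.
knowledge = retrieve_target_design_knowledge({"automobile"})
```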
In step 703, the design intent and the target design knowledge are processed by the large language model to obtain the target design code.
The target design code is used to generate a target design object.
In the embodiment of the application, the design intention and the target design knowledge can be input into the large language model, so that the large language model is called to process the design intention and the target design knowledge, and the target design code is output.
The specific type of the target design code is not limited herein and may be determined based on the needs of the design software executing the design task. Illustratively, the target design code may be in a domain specific language (DSL); in particular, in some examples, the target design code may be a Python script, since Python has become a generic secondary-development scripting language for much design software.
The target design code may be invoked by design software executing the design task to generate executable code to perform the corresponding design task, obtaining the target design object as a design result.
In the embodiment of the present application, there are various possible cases of a specific reasoning process for obtaining the target design code through the large language model, and the following exemplary descriptions are respectively given.
Reasoning mode 1: the design intent and the target design knowledge are input into the large language model for processing, so that the large language model outputs non-modularized target design code.
In this example, the large language model may output the target design code as an execution script of the design software through a single call. Here, the design code modules described below in reasoning mode 2 are not obtained in advance, so the target design code does not include information of at least one design code module; that is, the design is not performed by calling design code modules, and a specific execution script, including the content of specific execution instructions, may be generated directly as the target design code.
The large language model in this example may be obtained by fine-tuning the first training data and its labels in the related embodiments using the large language model fine-tuning described above.
Reasoning mode 2: the design intent and the target design knowledge are input into the large language model for processing, so that the large language model outputs target design code containing target triplet information and information of at least one design code module.
Specifically, in this example, the target design code includes target triplet information and information of at least one design code module, the at least one design code module is used to generate the target design object, and the target triplet information includes an operation object, an operation instruction, and an operation parameter corresponding to the at least one design code module.
In the embodiment of the present application, the design code module may be preconfigured, and in particular, reference may be made to a related embodiment of the large language model fine tuning, which is not described herein.
The at least one design code module can also be regarded as a function, each design code module can provide an interface for calling, and when the interface is called, the information of the triples of the operation object, the operation instruction and the operation parameter corresponding to the design code module can be used as the input information of the interface, so that the design code module is instructed to realize corresponding design according to the information of the operation object, the operation instruction and the operation parameter corresponding to the design code module.
The specific form of the target triplet information and the information of at least one design code module included in the target design code can have various situations, and the specific form of the target design code and the corresponding reasoning mode are respectively described below.
Reasoning mode 2.1: obtaining target design code capable of making interface calls to at least one design code module through two calls of the large language model.
Specifically, in some embodiments, step 703 comprises:
processing the design intent and the target design knowledge through a large language model to obtain target module identification and target triplet information, wherein the target module identification comprises module identifications corresponding to at least one design code module;
and obtaining the target design code through the large language model according to the target module identification and the target triplet information.
In the embodiment of the application, the design intent and the target design knowledge can be processed through the large language model to obtain the target module identification and the target triplet information. In this way, the large language model may determine at least one design code module (i.e., determine a target module identification of the at least one design code module) required for a current design task from a plurality of pre-configured design code modules based on design intent and target design knowledge, and may determine target triplet information required to execute the at least one design code module. The module identifier corresponding to a certain design code module may uniquely identify the corresponding design code module, and for example, the module identifier corresponding to a certain design code module may be a name or a number of the design code module.
Then, a second call may be made to the large language model to process the target module identification and the target triplet information, so as to generate the target design code, which may be used to call an interface of at least one design code module according to the target triplet information to implement the design task. It can be seen that the target design code may indicate an execution order of the at least one design code module, etc.
For example, in one exemplary scenario, a user's design intent for an automobile design may be obtained through the configuration interface, and target design knowledge for the automobile design corresponding to the design intent may be queried in the knowledge graph. The design intent and the target design knowledge may then be input into the large language model, so that the large language model outputs the target module identification and the target triplet information of at least one design code module required for the automobile design. For example, the at least one design code module may include a design code module for designing the engine of the automobile, a design code module for designing the chassis of the automobile, a design code module for designing the outline of the automobile, and the like. The target module identification and the target triplet information may then be processed through the large language model to generate the target design code. The target design code may indicate that the interface of the design code module for designing the engine is called first to design the engine, then the interface of the design code module for designing the chassis is called to design the chassis, and then the interface of the design code module for designing the outline is called to design the outline, so that the automobile is reasonably designed from the inside out, avoiding design conflicts and unreasonable design results.
In the embodiment of the application, the design code generation task can be executed in a layered manner through the two calls of the large language model, so that the generation logic of the design code is clearer and the accuracy of the target design code can be ensured.
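The two-call pipeline of reasoning mode 2.1 can be sketched as follows for the automobile scenario above. Both "LLM calls" are toy stand-ins that return fixed outputs; the module identifiers, triplet values, and the `call(...)` statement format of the generated script are all hypothetical.

```python
# Illustrative sketch of reasoning mode 2.1: a first "LLM call" selects target
# module identifiers and target triplet information; a second call assembles
# them into target design code that invokes the module interfaces in order.
def llm_call_1(design_intent, target_design_knowledge):
    """First call: emit target module identifiers and target triplet info."""
    module_ids = ["design_engine", "design_chassis", "design_outline"]
    triplets = {
        "design_engine": ("engine", "generate", {"cylinders": 4}),
        "design_chassis": ("chassis", "generate", {"wheelbase_m": 2.7}),
        "design_outline": ("outline", "generate", {"style": "sedan"}),
    }
    return module_ids, triplets

def llm_call_2(module_ids, triplets):
    """Second call: assemble an execution script that calls each interface."""
    lines = [f'call("{mid}", *{triplets[mid]})' for mid in module_ids]
    return "\n".join(lines)

ids, trips = llm_call_1("design a car", ["engine designed before outer shell"])
script = llm_call_2(ids, trips)
```

Note the inside-out execution order (engine, then chassis, then outline) encoded in the script, matching the constraint mined from the knowledge graph.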
Reasoning mode 2.2: through one call of the large language model, target design code comprising the target triplet information and the information of at least one design code module is obtained, but the target design code does not comprise an execution statement and therefore cannot be directly used as an execution script.
Specifically, in some embodiments, step 703 comprises:
And processing the design intent and the target design knowledge through a large language model to obtain target module identification and target triplet information as target design codes.
In this example, the target triplet information and the information of the at least one design code module in the target design code may be acquired by the design software to generate an execution script, so that the executable code is obtained according to the generated execution script to perform the design task, and the target design object is obtained as a design result.
In the example of the reasoning mode 2.1 and the example of the reasoning mode 2.2, the large language model adopted can be obtained by fine tuning in the mode of the second training data and the label thereof in the related embodiment of fine tuning by adopting the large language model.
Reasoning mode 2.3: obtaining target design code capable of making interface calls to at least one design code module through one call of the large language model.
In this example, inputting the design intent and the target design knowledge into the large language model may cause the large language model to directly output the target design code capable of invoking the at least one design code module to achieve the corresponding design task. It can be seen that in the target design code, an execution order of at least one design code module may be indicated, etc. The target design code may be executed as an execution script of the design software.
In this example, the large language model may be obtained by fine-tuning the third training data and the label thereof in the related embodiment using the foregoing large language model fine-tuning.
It can be seen that in the embodiment of the application, not only the explicit design intent input by the user can be obtained from the configuration interface, but also the target design knowledge associated with the current explicit design intent of the user can be queried from the knowledge graph to capture the implicit design intent of the target design object.
Therefore, the design intent and the target design knowledge from the knowledge graph can be combined, the large language model is guided to efficiently and accurately generate the target design code meeting the design requirement of the user, the intelligent generation of the target design code is realized, the design task is accurately and efficiently realized according to the target design code, and the target design object is obtained.
4. Calling the generated design code to design
In the embodiment of the application, after the target design code is obtained, the generated target design code can be called by corresponding design software to carry out design so as to obtain a target design object.
It should be noted that in the embodiment of the present application, the corresponding design software may be called within the cloud service provided by the cloud management platform (for example, the design scheme generating service shown in fig. 2) to execute the generated target design code and carry out the design, so as to obtain the target design object, which is then output to the user; alternatively, the target design code may be output to the user after being obtained by the cloud management platform, so that the user obtains the target design object through the design software of the client.
In a specific design task execution process, the design software may call the generated target design code to determine the components needed to generate the target design object.
For example, in some examples, when the target design code is invoked to design the target design object, if the target design code indicates that a certain component in the target design object is operated on, the component may be determined to be a component required for generating the target design object, and the component required for generating the target design object may be retrieved from a preset component database by a nearest neighbor component retrieval method or the like. The preset component database may store a plurality of preset components, any preset component may be in the form of a general component or the like so as to be general in various design tasks, and at this time, the preset component may be regarded as a standard component. Of course, in some examples, the preset component database may also include other components besides standard components, which are not limited herein.
Illustratively, preset components matching the components required to generate the target design object may be retrieved from the preset component database by means of structural topology or the like.
If a matching preset component is retrieved from the preset component database, the matching preset component may be obtained as a component required to generate the target design object.
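The nearest-neighbor retrieval step above can be sketched as follows. The cosine-similarity scoring, the toy feature vectors, and the 0.8 threshold are illustrative assumptions, not part of the embodiment:

```python
import numpy as np

def retrieve_nearest_component(query_feature, component_db, threshold=0.8):
    """Return the preset component most similar to the queried component,
    or None when no preset component clears the similarity threshold."""
    q = np.asarray(query_feature, dtype=float)
    q = q / np.linalg.norm(q)
    best_name, best_score = None, -1.0
    for name, feature in component_db.items():
        v = np.asarray(feature, dtype=float)
        score = float(q @ (v / np.linalg.norm(v)))  # cosine similarity
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical feature vectors for two standard components
db = {"bolt_M6": [1.0, 0.0, 0.2], "bracket_A": [0.1, 1.0, 0.0]}
match, score = retrieve_nearest_component([0.9, 0.05, 0.25], db)
```

In practice the feature vectors would come from a geometric or learned encoder, and an index structure (rather than a linear scan) would be used over a large component database.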
Or, in some examples, when the target design code is called to design the target design object, if the target design code indicates an operation on a target component in the first object that does not belong to the preset component database (that is, no preset component matching the target component is retrieved from the preset component database), a target preset component whose similarity with the target component meets a specified condition may be searched for among the plurality of preset components in the preset component database, based on the component parameters of the target component in the target design code. For example, the similarity meeting the specified condition may mean that, among the plurality of preset components in the preset component database, the preset component is consistent with the topology of the target component and has the highest similarity with the target component, or that the similarity is higher than a specified similarity threshold.
Then, shape fitting may be performed according to the target preset component and the component parameters of the target component, and the target component may be generated so as to operate on the target component according to the target design code.
Specifically, the target preset component may be used as a reference for the target component. When the topology of the target preset component is consistent with that of the target component, the dimensional differences between the target component and the target preset component may be obtained based on the topology, and shape fitting may be performed based on these differences, the related triplet information, and the like, so as to obtain a target component satisfying the shape requirement in the target design code as a component required for generating the target design object.
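A minimal sketch of the topology-consistent shape fitting described above; representing a part by a flat dictionary of named dimensions is a simplifying assumption made only for illustration:

```python
def fit_shape(reference_dims, target_dims):
    """Derive per-dimension scale factors that deform the topology-consistent
    reference preset part into the target part's required shape."""
    if reference_dims.keys() != target_dims.keys():
        raise ValueError("topology mismatch: dimension sets differ")
    # The dimensional difference drives the fit: scale = target / reference
    return {k: target_dims[k] / reference_dims[k] for k in reference_dims}

# Hypothetical dimensions (mm) of a preset bracket vs. the required target part
scale = fit_shape({"length": 100.0, "width": 40.0},
                  {"length": 120.0, "width": 40.0})
```

Applying the resulting scale factors to the reference geometry would yield a part satisfying the shape requirement in the target design code.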
Upon obtaining the components required to generate the target design object, the design software may invoke the generated target design code and execute the design instructions from bottom to top (e.g., from component, to higher-level component, to target design object) according to the topological hierarchy (e.g., a directed acyclic graph (DAG)) of these components and the design parameters in the target triplet information, so as to generate the target design object.
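The bottom-to-top execution over a component DAG can be illustrated with Python's standard `graphlib`; the component names and the hierarchy are hypothetical:

```python
from graphlib import TopologicalSorter

def assemble_bottom_up(dependencies, build):
    """Run the build step for every node in topological order, so leaf
    components are generated before the higher-level components and the
    target design object that depend on them."""
    for node in TopologicalSorter(dependencies).static_order():
        build(node)

# Hypothetical component hierarchy: node -> lower-level parts it is built from
deps = {"vehicle": {"body", "chassis"}, "body": {"panel"},
        "chassis": set(), "panel": set()}
order = []
assemble_bottom_up(deps, order.append)  # leaves first, "vehicle" last
```

In an actual design task, `build` would dispatch the design instruction and design parameters (from the target triplet information) for each component to the design software.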
After the target design object is generated, the target design object can be optimized, and the optimized target design object is output to the user.
For example, in some embodiments, after obtaining the target design code, further comprising:
Acquiring a target design object obtained by calling a target design code for design;
Inquiring global constraint information corresponding to a target design object from the knowledge graph;
And optimizing the target design object according to the global constraint information.
In the embodiment of the application, in the design process, the target design code generally indicates that the design is performed from bottom to top and from local to global, and the designed target design object is generally formed by combining a plurality of components.
Therefore, in order to ensure the rationality of the target design object, the large language model may query the knowledge graph for global constraint information corresponding to the target design object. Then, whether the target design object satisfies the queried global constraint information can be detected through the large language model or the like; if not, the target design object can be adjusted according to the global constraint information to obtain an optimized target design object.
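The detect-and-adjust loop above can be sketched as follows; representing each piece of global constraint information as a (check, fix) pair is an assumption made for illustration:

```python
def optimize_design(design, constraints):
    """Apply each queried global constraint: if the design violates it,
    adjust the design with the constraint's repair function."""
    for check, fix in constraints:
        if not check(design):
            design = fix(design)
    return design

# Hypothetical global constraint: total component width must not exceed 200 mm
constraints = [(
    lambda d: sum(d["widths"]) <= 200.0,
    lambda d: {"widths": [w * 200.0 / sum(d["widths"]) for w in d["widths"]]},
)]
optimized = optimize_design({"widths": [120.0, 100.0]}, constraints)
```

Here the repair function rescales the violating dimensions proportionally; a real system could instead re-invoke the large language model to regenerate the affected part of the design.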
In addition, in some examples, the cloud management platform may also receive feedback information of the user on the generated target design object to determine an optimization strategy for the large language model and/or the knowledge graph according to the feedback information, so as to optimize the large language model and/or the knowledge graph.
For example, reinforcement learning may be used to optimize the large language model, and the target design object and its related information may be fed into the related scheme of knowledge graph construction to optimize the knowledge graph. For example, the weights corresponding to the respective pieces of design knowledge in the knowledge graph may be updated according to the design knowledge employed by the target design object: if the feedback information of the user indicates that the target design object is an accurate design object, the weight corresponding to the design knowledge adopted by the target design object in the knowledge graph may be increased; if the feedback information of the user indicates that the design of a certain component in the target design object is inaccurate, the weight corresponding to the design knowledge associated with the design of that component in the knowledge graph may be reduced.
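The weight update from user feedback might look like the following sketch; the fixed step size, the [0, 1] clamping, and the knowledge-item names are assumptions:

```python
def update_knowledge_weights(weights, feedback, step=0.1):
    """Raise the weight of design knowledge confirmed as accurate and lower
    the weight of knowledge tied to an inaccurately designed component."""
    for knowledge, accurate in feedback.items():
        delta = step if accurate else -step
        weights[knowledge] = min(1.0, max(0.0, weights[knowledge] + delta))
    return weights

# Hypothetical feedback per piece of design knowledge used by the design object
w = update_knowledge_weights(
    {"fillet_rule": 0.5, "bolt_pitch_rule": 0.5},
    {"fillet_rule": True, "bolt_pitch_rule": False},
)
```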
An exemplary flow diagram of an embodiment of the present application is shown in fig. 8.
In the example shown in fig. 8, among others, the design task may be an assembly design task.
In this exemplary scenario, the intent understanding module may obtain the design intent, which may include one or more of an intent based on the second text, an intent based on the second design object, an intent based on the second image.
The intent understanding module may then query the target design knowledge corresponding to the design intent from the knowledge graph, thereby inputting the target design knowledge as well as the design intent to the large language model.
The large language model processes the target design knowledge and the design intent to obtain the target design code and outputs it to the design software.
The design software may invoke target design code to perform one or more steps such as component matching, shape fitting, assembly parameter determination, and assembly to obtain an assembly solution as a target design object.
The assembly scheme may then be globally optimized. If the user determines, through the designated configuration interface, that the optimized assembly scheme does not meet the requirements, the design intent acquisition and subsequent steps can be re-executed to regenerate the design code and obtain a new assembly scheme.
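The regenerate-until-approved flow of fig. 8 can be condensed into a small driver loop; every callable here is a hypothetical stand-in for the corresponding module (intent understanding, knowledge graph, large language model, design software):

```python
def assembly_pipeline(get_intent, query_knowledge, generate_code, run_design,
                      optimize, approved, max_rounds=5):
    """Regenerate the design code from a fresh design intent until the
    optimized assembly scheme satisfies the user, as in fig. 8."""
    for _ in range(max_rounds):
        intent = get_intent()                    # intent understanding module
        knowledge = query_knowledge(intent)      # knowledge graph lookup
        code = generate_code(intent, knowledge)  # large language model
        scheme = optimize(run_design(code))      # design software + global optimization
        if approved(scheme):                     # user check via configuration interface
            return scheme
    return None

# Stub run: the user approves only the second regenerated scheme
rounds = []
result = assembly_pipeline(
    get_intent=lambda: f"intent-{rounds.append(1) or len(rounds)}",
    query_knowledge=lambda intent: "knowledge",
    generate_code=lambda intent, knowledge: f"code for {intent}",
    run_design=lambda code: code.replace("code", "scheme"),
    optimize=lambda scheme: scheme,
    approved=lambda scheme: scheme.endswith("intent-2"),
)
```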
In the following, taking a modeling design scene of a vehicle as an example, an exemplary information interaction schematic diagram of an embodiment of the present application is described.
In one example, as shown in FIG. 9, a vehicle may be modeled, and modeling design software may be included in a modeling design system, which may be located on a cloud management platform, for example.
The example may specifically include the following steps:
1. a user inputs data, such as text, design objects, and/or images, to the modeling design system;
2. the modeling design system preprocesses the data input by the user, such as data decomposition, feature recognition, etc.;
3. The large language model carries out intention understanding on the preprocessed user input data to obtain design intention;
4. the large language model searches the knowledge graph for design knowledge related to the design intent;
5. the knowledge graph returns target design knowledge related to the design intention to the large language model;
6. the large language model generates target design codes according to the design intention and target design knowledge;
7. the large language model returns the target design code to modeling design software of the modeling design system;
8. the modeling design software calls the target design code to generate a preliminary design scheme as the target design object;
9. the large language model queries the knowledge graph for global constraint information of the preliminary design scheme;
10. the knowledge graph returns the global constraint information to the large language model;
11. the large language model performs global optimization on the preliminary design scheme according to the global constraint information to obtain an optimized design scheme;
12. the large language model returns the optimized design scheme to the modeling design system;
13. the modeling design system outputs the optimized design scheme to the user;
14. the user sends feedback information on the optimized design scheme to the modeling design system;
15. the modeling design system formulates an optimization strategy for the large language model and the knowledge graph according to the feedback information;
16. the large language model is optimized;
17. the knowledge graph is optimized.
In another example, the modeling of the vehicle may be designed first, and the vehicle may then be assembled according to the modeling result, where the modeling design system may include modeling design software and the assembly design system may include assembly design software. For example, the modeling design system and the assembly design system may be located on a cloud management platform. As shown in fig. 10, this example may specifically include the following steps:
1. user 1 (e.g., a stylist) inputs first input data, such as text, design objects, and/or images, to the modeling design system;
2. the modeling design system preprocesses the data input by the user, such as data decomposition, feature recognition, etc.;
3. the large language model for modeling design performs intent understanding on the preprocessed user input data to obtain a design intent;
4. the large language model for modeling design retrieves design knowledge related to the design intent from the knowledge graph for modeling design;
5. the knowledge graph for modeling design returns first target design knowledge related to the design intent to the large language model;
6. the large language model for modeling design generates a first target design code according to the design intent and the first target design knowledge;
7. the large language model for modeling design returns the first target design code to the modeling design software of the modeling design system;
8. the modeling design software calls the first target design code to generate a preliminary modeling design scheme;
9. the large language model for modeling design queries the knowledge graph for modeling design for global constraint information of the preliminary modeling design scheme;
10. the knowledge graph for modeling design returns the global constraint information to the large language model for modeling design;
11. the large language model for modeling design performs global optimization on the preliminary modeling design scheme according to the global constraint information to obtain an optimized modeling design scheme;
12. the large language model for modeling design returns the optimized modeling design scheme to the modeling design system;
13. the modeling design system outputs the optimized modeling design scheme to user 2 (e.g., an assembly designer);
14. user 2 inputs second input data, such as text, design objects, and/or images, to the assembly design system;
15. the assembly design system preprocesses the data input by the user, such as data decomposition, feature recognition, etc.;
16. the large language model for assembly design performs intent understanding on the preprocessed user input data to obtain a design intent;
17. the large language model for assembly design retrieves design knowledge related to the design intent from the knowledge graph for assembly design;
18. the knowledge graph for assembly design returns second target design knowledge related to the design intent to the large language model;
19. the large language model for assembly design generates a second target design code according to the design intent and the second target design knowledge;
20. the large language model for assembly design returns the second target design code to the assembly design software of the assembly design system;
21. the assembly design software calls the second target design code to generate a preliminary assembly design scheme;
22. the large language model for assembly design queries the knowledge graph for assembly design for global constraint information of the preliminary assembly design scheme;
23. the knowledge graph for assembly design returns the global constraint information to the large language model for assembly design;
24. the large language model for assembly design performs global optimization on the preliminary assembly design scheme according to the global constraint information to obtain an optimized assembly design scheme;
25. the large language model for assembly design returns the optimized assembly design scheme to the assembly design system;
26. user 2 sends feedback information on the optimized assembly design scheme to the assembly design system;
27. the assembly design system formulates an optimization strategy for the large language model for assembly design and the knowledge graph for assembly design according to the feedback information;
28. the large language model for assembly design is optimized;
29. the knowledge graph for assembly design is optimized.
Having described the design code generation method provided by the embodiment of the present application from a plurality of aspects, the design code generation apparatus 11 and the knowledge graph construction apparatus 12 based on the cloud service provided by the embodiment of the present application are described below with reference to the accompanying drawings.
As shown in fig. 11, an embodiment of the present application provides a cloud service-based design code generating apparatus 11 applied to a cloud management platform for managing an infrastructure for providing cloud services, the infrastructure including a plurality of areas each including at least one cloud data center, the cloud services operating on at least one server of the at least one cloud data center located in the plurality of areas, the apparatus 11 comprising:
An interface module 1101, configured to provide a configuration interface, where the configuration interface is configured to obtain information of a design intent input by a user, and the design intent is configured to indicate generation of a target design object;
A processing module 1102, configured to:
Retrieving target design knowledge corresponding to the design intent from a knowledge graph, wherein the knowledge graph comprises one or more of design knowledge from a first text, design knowledge from a first design object and design knowledge from a first image, and the target design knowledge is used for describing implicit design intent of the target design object;
And processing the design intent and the target design knowledge through the large language model to obtain target design codes, wherein the target design codes are used for generating target design objects.
Optionally, the interface module 1101 is configured to:
acquiring a multi-modal design material, the multi-modal design material comprising a first design object, and the multi-modal design material further comprising one or more of a first text and a first image;
the processing module 1102 is configured to:
obtaining, from the first design object, a first geometric feature of a first part of the one or more parts comprised by the first design object, the first geometric feature being used to describe a feature associated with a geometric element in the first part;
According to the first geometric characteristics, obtaining design knowledge corresponding to a first design object;
After the design knowledge corresponding to the design material of each mode is obtained, carrying out joint characterization on the design knowledge corresponding to the design material of each mode to obtain a target joint characterization;
And constructing a knowledge graph according to the target joint characterization.
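One simple way to realize the joint characterization above is to concatenate and normalize per-modality embeddings; the fusion scheme and the toy vectors are assumptions made for illustration, not the embodiment's prescribed method:

```python
import numpy as np

def joint_characterization(modal_embeddings):
    """Fuse the design-knowledge embeddings of each modality (text, image,
    design object) into a single L2-normalized joint representation."""
    fused = np.concatenate([np.asarray(e, dtype=float) for e in modal_embeddings])
    return fused / np.linalg.norm(fused)

# Toy embeddings for two modalities of design material
joint = joint_characterization([[3.0, 0.0], [0.0, 4.0]])
```

The resulting target joint characterization could then serve as the node representation from which the knowledge graph is constructed.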
Optionally, the target design code includes target triplet information and information of at least one design code module, the at least one design code module is used for generating a target design object, and the target triplet information includes an operation object, an operation instruction and an operation parameter corresponding to the at least one design code module.
Optionally, the processing module 1102 is configured to:
processing the design intent and the target design knowledge through a large language model to obtain target module identification and target triplet information, wherein the target module identification comprises module identifications corresponding to at least one design code module;
and obtaining the target design code through the large language model according to the target module identification and the target triplet information.
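The structure of the target design code described above (target module identifiers plus target triplet information) can be sketched as a plain data model; the field names and example values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DesignTriplet:
    """One operation of a design code module: the operation object,
    the operation instruction, and the operation parameters."""
    operation_object: str
    operation_instruction: str
    operation_parameters: dict = field(default_factory=dict)

def build_target_design_code(module_ids, triplets):
    """Combine the target module identifiers with the target triplet
    information into the target design code."""
    return {"modules": list(module_ids), "triplets": list(triplets)}

code = build_target_design_code(
    ["sketch_module", "extrude_module"],
    [DesignTriplet("door_panel", "extrude", {"depth_mm": 5.0})],
)
```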
Optionally, the design intent includes an intent based on a second design object,
The interface module 1101 is configured to obtain a second geometric feature of a second part associated with a target design object of the one or more parts included in the second design object, where the second geometric feature is configured to describe a feature related to a geometric element in the second part;
the processing module 1102 is configured to obtain an intent based on the second design object based on the second geometric feature.
Optionally, the processing module 1102 is configured to:
decomposing the second design object to obtain one or more layers of components in the second design object;
Obtaining a target level component from one or more levels of components in the second design object as a second component;
a second geometric feature of the second component is acquired.
Optionally, the design intent further includes an intent derived based on the second text, and the processing module 1102 is configured to:
performing joint characterization on the second geometric features and the design information in the second text to obtain a first joint characterization;
From the first joint characterization, a first joint characterization intent is obtained, the first joint characterization intent comprising an intent derived based on the second design object and an intent derived based on the second text.
Optionally, the design intent further includes an intent derived based on the second image, and the processing module 1102 is configured to:
Identifying design information in the second image by object detection and/or semantic segmentation, the design information in the second image including one or more of entities in the second image, relationships between entities in the second image, attributes of the entities in the second image;
Performing joint characterization on the second geometric feature and design information in the second image to obtain a second joint characterization;
According to the second joint characterization, a second joint characterization intent is obtained, the second joint characterization intent comprising an intent derived based on the second design object and an intent derived based on the second image.
Optionally, the second geometric feature comprises one or more of:
spatial information of geometric elements in the second component, semantic information describing the second component and/or geometric elements in the second component, structural features of the second component, gradient features of the second component.
As shown in fig. 12, an embodiment of the present application provides a knowledge graph construction apparatus 12, which is applied to a cloud management platform, where the cloud management platform is used to manage an infrastructure for providing cloud services, and the infrastructure includes a plurality of areas, each area including at least one cloud data center, and the cloud services are run on at least one server located in at least one cloud data center of the plurality of areas. The device 12 comprises:
An interface module 1201, configured to obtain a multi-modal design material, where the multi-modal design material includes a first design object, and the multi-modal design material further includes one or more of a first text, a first image;
a processing module 1202 for:
obtaining, from the first design object, a first geometric feature of a first part of the one or more parts comprised by the first design object, the first geometric feature being used to describe a feature associated with a geometric element in the first part;
According to the first geometric characteristics, obtaining design knowledge corresponding to a first design object;
After the design knowledge corresponding to the design material of each mode is obtained, carrying out joint characterization on the design knowledge corresponding to the design material of each mode to obtain a target joint characterization;
And constructing a knowledge graph according to the target joint characterization.
Optionally, the interface module 1201 is configured to provide a configuration interface for acquiring information of a design intent input by a user, the design intent being used for indicating generation of a target design object;
the processing module 1202 is configured to:
retrieving target design knowledge corresponding to the design intent from the knowledge graph, wherein the target design knowledge is used for describing the implicit design intent of the target design object;
And processing the design intent and the target design knowledge through the large language model to obtain target design codes, wherein the target design codes are used for generating target design objects.
Optionally, there are a plurality of first design objects, and the processing module 1202 is configured to:
Determining target first geometric features from the first geometric features corresponding to the first design objects according to the co-occurrence relation of the first geometric features corresponding to the first design objects;
and obtaining design knowledge corresponding to the first design object according to the target first geometric feature.
Optionally, in the knowledge graph, the weight of the design knowledge corresponding to the first geometric feature of the target is determined according to the co-occurrence probability corresponding to the first geometric feature of the target in the co-occurrence relationship.
Optionally, the knowledge graph comprises a first sub-graph and at least one second sub-graph, the first sub-graph is used for describing common design knowledge in design knowledge corresponding to the multi-mode design material, the common design knowledge comprises one or more of design knowledge from at least one first text, design knowledge from at least one first image, design knowledge from a plurality of first design objects, and design knowledge that co-occurrence probability in the plurality of first design objects meets a specified condition, and any one of the second sub-graph is obtained according to one or more of design knowledge corresponding to one or more first design objects, design knowledge corresponding to first text associated with one or more first design objects, and design knowledge corresponding to first image associated with one or more first design objects.
The processing module and the interface module can be realized by software or hardware. Illustratively, the implementation of the processing module is described next as an example of the processing module. Similarly, the implementation of the interface module may refer to the implementation of the processing module.
As an example of a software functional unit, the processing module may include code that runs on a computing instance. The computing instance may include at least one of a physical host (computing device), a virtual machine, and a container. Further, there may be one or more computing instances. For example, the processing module may include code running on multiple hosts/virtual machines/containers. It should be noted that the multiple hosts/virtual machines/containers for running the code may be distributed in the same region or in different regions. Further, the multiple hosts/virtual machines/containers for running the code may be distributed in the same availability zone (AZ) or in different AZs, each AZ comprising one data center or multiple geographically close data centers. Typically, a region may comprise multiple AZs.
Also, the multiple hosts/virtual machines/containers for running the code may be distributed in the same virtual private cloud (VPC) or across multiple VPCs. In general, one VPC is disposed in one region, and a communication gateway is disposed in each VPC for interconnecting VPCs in the same region and VPCs in different regions.
As an example of a hardware functional unit, the processing module may include at least one computing device, such as a server. Alternatively, the processing module may be implemented by a central processing unit (CPU), an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or the like. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), a data processing unit (DPU), a neural network processing unit (NPU), a system on chip (SoC), an offload card, an accelerator card, or any combination thereof.
Multiple computing devices included in a processing module may be distributed in the same region or may be distributed in different regions. The plurality of computing devices included in the processing module may be distributed in the same AZ or may be distributed in different AZ. Also, multiple computing devices included in a processing module may be distributed in the same VPC, or may be distributed among multiple VPCs. The plurality of computing devices may be any combination of computing devices such as servers, ASIC, PLD, CPLD, FPGA, GAL, DPU, NPU, soC, off-load cards, accelerator cards, and the like.
It should be noted that, in other embodiments, the processing module may be configured to perform any step of the cloud service-based design code generation method or the knowledge graph construction method, and the interface module may likewise be configured to perform any step of these methods. The steps that the processing module and the interface module are responsible for implementing may be specified as needed; by implementing different steps of the cloud service-based design code generation method or the knowledge graph construction method, the processing module and the interface module together realize all functions of the cloud service-based design code generation apparatus or the knowledge graph construction apparatus.
The present application also provides a computing device 130. As shown in fig. 13, computing device 130 includes a bus 132, a processor 134, a memory 136, and a communication interface 138. The processor 134, the memory 136, and the communication interface 138 communicate via the bus 132. Computing device 130 may be a server or a terminal device. It should be understood that the number of processors and memories in computing device 130 is not limited in the present application.
Bus 132 may be a peripheral component interconnect express (PCIe) bus, an extended industry standard architecture (EISA) bus, a unified bus (Ubus or UB), a compute express link (CXL), a cache coherent interconnect for accelerators (CCIX), or the like. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one line is shown in fig. 13, but this does not mean that there is only one bus or one type of bus. Bus 132 may include a path for transferring information between the various components of computing device 130 (e.g., memory 136, processor 134, communication interface 138).
Processor 134 may include any one or more of a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), a digital signal processor (DSP), an ASIC, an FPGA, a CPLD, an NPU, an SoC, an offload card, an accelerator card, or the like.
The memory 136 may include volatile memory, such as random access memory (RAM). The memory 136 may also include non-volatile memory, such as one or more of read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state disk (SSD). The memory 136 may also be implemented by storage class memory (SCM), phase change memory (PCM), or another type of storage medium.
It should be noted that the functions of the memory 136 may be implemented by a single type of storage medium configured in the same computing device, or by two or more types of storage media, which is not limited in the present application.
The memory 136 stores executable program code, and the processor 134 executes the executable program code to implement the functions of the foregoing interface module and processing module, respectively, so as to implement the cloud service-based design code generation method or the knowledge graph construction method applied to the cloud management platform in the foregoing embodiments. That is, the memory 136 stores instructions for executing the cloud service-based design code generation method or the knowledge graph construction method applied to the cloud management platform in the foregoing embodiments.
The communication interface 138 enables communication between the computing device 130 and other devices or communication networks using a transceiver module such as, but not limited to, a network interface card, transceiver, or the like.
The embodiment of the application also provides a computing device cluster. The cluster of computing devices includes at least one computing device. The computing device may be a server, such as a central server, an edge server, or a local server in a local data center. In some embodiments, the computing device may also be a terminal device such as a desktop, notebook, or smart phone.
As shown in fig. 14, the cluster of computing devices includes at least one computing device 130. The same instructions for performing the design code generation method or the knowledge graph construction method may be stored in the memory 136 in one or more computing devices 130 in the computing device cluster.
In some possible implementations, the memory 136 of one or more computing devices 130 in the computing device cluster may also have stored therein partial instructions for executing a cloud service-based design code generation method or a knowledge graph construction method, respectively. In other words, a combination of one or more computing devices 130 may collectively execute instructions for performing a cloud service-based design code generation method or a knowledge graph construction method.
It should be noted that, the memory 136 in different computing devices 130 in the computing device cluster may store different instructions for executing part of functions of the cloud service-based design code generation method or the knowledge graph construction method, respectively. That is, the instructions stored by the memory 136 in the different computing devices 130 may implement the functionality of one or more of the interface modules and the processing modules.
In some possible implementations, one or more computing devices in the computing device cluster may be connected through a network. The network may be a wide area network, a local area network, or the like. Fig. 15 shows one possible implementation. As shown in fig. 15, the computing device 130A and the computing device 130B are connected by a network; specifically, each computing device connects to the network through its communication interface. In this possible implementation, the memory 136 in the computing device 130A may store instructions for performing the functions of the interface module, while the memory 136 in the computing device 130B may store instructions for performing the functions of the processing module. Alternatively, in other examples, the memory 136 in the computing device 130A may store instructions for performing part of the functions of the processing module, while the memory 136 in the computing device 130B may store instructions for performing another part of the functions of the processing module.
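By way of illustration only, the split described above, in which one device hosts the interface module and another device hosts the processing module, reached through each device's communication interface, can be sketched in Python. This is a minimal sketch under assumed conventions: the TCP exchange, the JSON message format, and the function names (`interface_module`, `processing_module`, `run_device_b`) are hypothetical and not part of the embodiment.

```python
# Hypothetical sketch: interface module (device 130A) forwards a request over
# the network to the processing module (device 130B). Names and protocol are
# illustrative assumptions, not the embodiment's actual implementation.
import json
import socket
import threading

def processing_module(request: dict) -> dict:
    # Placeholder for the design code generation / knowledge graph logic.
    return {"result": f"generated code for {request['design']}"}

def run_device_b(host: str, port: int, ready: threading.Event) -> None:
    # Device 130B: exposes the processing module behind a TCP endpoint,
    # handling a single request for simplicity.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            request = json.loads(conn.recv(4096).decode())
            conn.sendall(json.dumps(processing_module(request)).encode())

def interface_module(design: str, host: str, port: int) -> dict:
    # Device 130A: the interface module forwards the request through its
    # communication interface and returns the processing module's response.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(json.dumps({"design": design}).encode())
        return json.loads(cli.recv(4096).decode())

if __name__ == "__main__":
    ready = threading.Event()
    t = threading.Thread(target=run_device_b, args=("127.0.0.1", 9130, ready))
    t.start()
    ready.wait()
    print(interface_module("vpc-topology", "127.0.0.1", 9130))
    t.join()
```

The same structure also accommodates the alternative partition in which both devices hold parts of the processing module: the interface module would simply forward to two endpoints instead of one.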
It should be appreciated that the functionality of computing device 130A shown in fig. 15 may also be performed by multiple computing devices 130. Likewise, the functionality of computing device 130B may also be performed by multiple computing devices 130.
An embodiment of the application also provides another computing device cluster. The connection manner between computing devices in this computing device cluster may be similar to that of the computing device clusters shown in fig. 14 and fig. 15. The difference is that the memory 136 in one or more computing devices 130 in this computing device cluster may store the same instructions for executing the cloud service-based design code generation method or the knowledge graph construction method.
In some possible implementations, the memory 136 of one or more computing devices 130 in this computing device cluster may instead store only part of the instructions for executing the cloud service-based design code generation method or the knowledge graph construction method. In other words, a combination of one or more computing devices 130 may collectively execute the complete instructions for performing the cloud service-based design code generation method or the knowledge graph construction method.
It should be noted that the memory 136 in different computing devices 130 in this computing device cluster may store different instructions, each for performing part of the functions of the cloud service-based design code generation method or the knowledge graph construction method. That is, the instructions stored in the memory 136 of the different computing devices 130 may implement the functions of one or more of the interface module and the processing module.
An embodiment of the present application also provides a computer program product containing instructions. The computer program product may be software or a program product containing instructions that is capable of running on a computing device or being stored in any usable medium. When run on at least one computing device, the computer program product causes the at least one computing device to perform the cloud service-based design code generation method or the knowledge graph construction method.
An embodiment of the application also provides a computer-readable storage medium. The computer-readable storage medium may be any usable medium accessible by a computing device, or a data storage device, such as a data center, containing one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state disk), or the like. The computer-readable storage medium includes instructions that instruct a computing device to perform the cloud service-based design code generation method or the knowledge graph construction method.
An embodiment of the application also provides a chip system, including a processor configured to implement the steps performed by the foregoing computing device cluster. In one possible design, the chip system may further include a memory for storing necessary program instructions and data. The chip system may consist of chips, or may include chips and other discrete devices.
It will be clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division of units is merely a logical function division; in actual implementation there may be other divisions, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
When implemented in the form of a software functional unit and sold or used as a stand-alone product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.