
CN119272390B - Cloud drawing intelligent editing method

Info

Publication number: CN119272390B
Authority: CN (China)
Prior art keywords: editing, layer, target, influence, layers
Legal status: Active (granted)
Application number: CN202411784926.4A
Other languages: Chinese (zh)
Other versions: CN119272390A
Inventors: 何小敏, 郑俐, 只飞, 贾若, 李钍, 李志勇
Assignee: Beijing Honghu Yuntu Technology Co ltd

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an intelligent editing method for cloud drawings, relating to the technical field of drawing editing. The method comprises: setting an editing target area of a drawing through a cloud, and carrying out target analysis according to the editing target area to obtain an editing direct target layer and an effect recognition collaborative layer; carrying out drawing editing based on the editing direct target layer, carrying out editing effect recognition and evaluation through the effect recognition collaborative layer, and outputting editing annotations. The method solves the technical problems of inaccurate editing areas, incomplete consideration of layer relationships, and unreasonable selection of editing tools in the prior art, and achieves the technical effects of accurately determining the editing area, intelligently analyzing and identifying layers, and efficiently executing editing operations, thereby improving the accuracy and efficiency of cloud drawing editing.

Description

Cloud drawing intelligent editing method
Technical Field
The invention relates to the technical field of drawing editing, in particular to an intelligent cloud drawing editing method.
Background
Against the background of continuous development in fields such as construction, engineering, and design, the importance of cloud drawing editing is increasingly prominent. However, conventional cloud drawing editing methods face a series of problems. First, determination of the editing area lacks precision: a lot of time is often required to identify and delimit the area manually, and incorrect selections or missed key areas are common. Second, the handling of layer relationships is imperfect: it is difficult to comprehensively and accurately identify both the direct target layer closely related to the editing operation and the potentially affected effect recognition collaborative layers, so important layer association effects may be ignored during editing. Furthermore, the selection of editing tools lacks intelligence: editors must choose appropriate tools from a multitude of functionally diverse CAD tools based on their own experience, which is inefficient for complex editing tasks and makes the rationality of the selection hard to guarantee.
The prior art has the technical problems of inaccurate editing area, incomplete layer relation consideration and unreasonable editing tool selection.
Disclosure of Invention
The application provides an intelligent cloud drawing editing method which is used for solving the technical problems of inaccurate editing area, incomplete layer relation consideration and unreasonable editing tool selection in the prior art.
In view of the above problems, the application provides an intelligent cloud drawing editing method, which comprises the following steps:
Setting an editing target area of a drawing through a cloud, and carrying out target analysis according to the editing target area to obtain an editing direct target layer and an effect recognition collaborative layer; respectively generating a layer locking instruction and a collaborative display instruction according to the editing direct target layer and the effect recognition collaborative layer, wherein the layer locking instruction is used for locking the editing direct target layer, and the collaborative display instruction is used for carrying out synchronized viewing of the effect recognition collaborative layer during editing; carrying out drawing editing based on the editing direct target layer, carrying out editing effect recognition and evaluation through the effect recognition collaborative layer, and outputting editing annotations, which can be displayed collaboratively through the collaborative display instruction.
The one or more technical solutions provided by the application have at least the following technical effects or advantages:
An editing target area of a drawing is set through a cloud; target analysis is carried out according to the editing target area to obtain an editing direct target layer and an effect recognition collaborative layer; a layer locking instruction and a collaborative display instruction are respectively generated according to the editing direct target layer and the effect recognition collaborative layer; drawing editing is carried out based on the editing direct target layer; editing effect recognition and evaluation is carried out through the effect recognition collaborative layer; and editing annotations are output. The technical effects of accurately determining the editing area, intelligently analyzing and identifying layers, and efficiently executing editing operations are thereby achieved, and the accuracy and efficiency of cloud drawing editing are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an intelligent editing method for cloud drawing provided by an embodiment of the application;
Fig. 2 is a schematic flow chart of connecting the matching operation function edge node in the cloud drawing intelligent editing method according to an embodiment of the present application.
Detailed Description
The application provides an intelligent cloud drawing editing method which is used for solving the technical problems of inaccurate editing area, incomplete layer relation consideration and unreasonable editing tool selection in the prior art.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In an embodiment, as shown in fig. 1, the present application provides a cloud drawing intelligent editing method, which includes:
Step S100, setting an editing target area of the drawing through a cloud, and carrying out target analysis according to the editing target area to obtain an editing direct target layer and an effect recognition collaborative layer.
Specifically, in the process of editing a cloud drawing, an editing target area of the drawing is set according to specific requirements through the cloud platform. This can be done in a number of ways, such as framing with a specific tool on the cloud interface, or entering a specific coordinate range to determine the region; the setting of this area provides an explicit direction for subsequent operations. Target analysis is then carried out on the determined editing target area, and various characteristics of the area are analyzed in depth. On one hand, the layer structure within the region is parsed to determine the relationships and hierarchy between different layers. On the other hand, object attributes are analyzed, including characteristics such as the type, size, and color of the design elements. At the same time, the operation content, namely the types of editing operation the user may perform, is considered. Through comprehensive analysis of these aspects, the editing direct target layer can be accurately identified. This is the layer on which the editing operation is directly performed, and it contains the specific design elements or graphic parts concerned. Meanwhile, the effect recognition collaborative layer is determined: a layer associated with the editing direct target layer that can reflect the influence of the editing operation on other parts during editing. For example, if the editing direct target layer is modified, the effect recognition collaborative layer can display effects such as color changes and adjustments of spatial position, thereby helping the user better evaluate the influence range and effect of the editing operation.
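The patent does not give a concrete data model for this analysis, but the classification can be pictured with a minimal Python sketch, assuming each layer carries a bounding box and pairwise association degrees; all names (Layer, analyze_targets) and the 0.5 threshold are illustrative assumptions rather than the patent's actual interfaces.

```python
# Hypothetical sketch of the target analysis in step S100: layers intersecting
# the editing target area are split into the direct target layer(s) and the
# effect recognition collaborative layers via their association degrees.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    bbox: tuple        # (x_min, y_min, x_max, y_max) spatial extent
    association: dict  # other layer name -> association degree in [0, 1]

def overlaps(a, b):
    """Axis-aligned bounding-box intersection test."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def analyze_targets(layers, region, edit_layer, threshold=0.5):
    """Return (direct target layers, collaborative layers) for the region."""
    in_region = [l for l in layers if overlaps(l.bbox, region)]
    direct = [l for l in in_region if l.name == edit_layer]
    collaborative = [l for l in in_region
                     if l.name != edit_layer
                     and any(l.association.get(d.name, 0.0) >= threshold
                             for d in direct)]
    return direct, collaborative

wall = Layer("wall", (0, 0, 10, 10), {"window-door": 0.8, "electric": 0.5})
door = Layer("window-door", (2, 0, 4, 3), {"wall": 0.8})
note = Layer("annotation", (20, 20, 30, 30), {})
direct, collab = analyze_targets([wall, door, note], (1, 0, 5, 5), "wall")
print([l.name for l in direct], [l.name for l in collab])
# ['wall'] ['window-door'] -- the annotation layer lies outside the region
```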
Step S200, respectively generating a layer locking instruction and a collaborative display instruction according to the editing direct target layer and the effect identification collaborative layer, wherein the layer locking instruction is used for locking the editing direct target layer, and the collaborative display instruction is used for carrying out editing synchronous viewing operation on the effect identification collaborative layer.
Specifically, a layer locking instruction is generated based on the editing direct target layer. In a scenario where multiple people collaboratively edit the same CAD drawing, each user, after determining the editing direct target layer for the part they intend to edit, locks it with a layer locking instruction in order to avoid conflicts. This ensures that only one user can modify a particular layer or element at a time; other users cannot modify the locked portion, but can view its state and follow the current editing progress. The mechanism effectively prevents the confusion and errors caused by multiple users editing the same part simultaneously, and ensures the orderliness of the editing process. After the current user finishes editing or actively unlocks the layer, other users can operate on that part, achieving efficient and accurate multi-user collaboration.
Second, a collaborative display instruction is generated according to the effect identification collaborative layer. In the drawing editing process there are close relationships between different layers, and editing one layer may affect others; when the edited layer is correlated with and influences other layers, the collaborative display instruction plays an important role. Through this instruction, the user can synchronously view the changes in the effect identification collaborative layer during editing, and can therefore fully consider functional collaboration with other layers and the matching of parameters such as size when editing the drawing. For example, when the size of a certain layer is adjusted, synchronously viewing the effect identification collaborative layers reveals in time whether the adjustment affects the matching effect of other layers, so that corresponding adjustments can be made and the integrity and coordination of the whole drawing are ensured. This collaborative display mechanism greatly improves the quality and efficiency of drawing editing and makes multi-user collaborative editing smoother and more efficient.
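A minimal in-memory sketch of this locking behaviour follows; the class and method names (LayerLockManager, acquire, release) are hypothetical, and a real cloud platform would persist the lock table and push notifications to collaborators over the network.

```python
# Sketch of the layer-locking mechanism: one owner per layer at a time,
# read-only visibility for everyone else, release required before handover.
import threading

class LayerLockManager:
    def __init__(self):
        self._locks = {}                # layer name -> owning user
        self._mutex = threading.Lock()  # guards the lock table itself

    def acquire(self, layer, user):
        """Lock `layer` for `user`; fail if another user already holds it."""
        with self._mutex:
            owner = self._locks.get(layer)
            if owner is not None and owner != user:
                return False            # someone else is editing this layer
            self._locks[layer] = user
            return True

    def release(self, layer, user):
        """Unlock only if `user` is the current owner."""
        with self._mutex:
            if self._locks.get(layer) == user:
                del self._locks[layer]

    def owner(self, layer):
        """Other users may inspect, but not modify, a locked layer."""
        with self._mutex:
            return self._locks.get(layer)

mgr = LayerLockManager()
assert mgr.acquire("wall", "alice")    # alice locks the direct target layer
assert not mgr.acquire("wall", "bob")  # bob must wait; he can still view it
mgr.release("wall", "alice")
assert mgr.acquire("wall", "bob")      # after release, bob may edit
```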
Step S300, carrying out drawing editing based on the editing direct target layer, carrying out editing effect identification and evaluation through the effect identification collaborative layer, and outputting editing annotations, wherein the editing annotations can be collaboratively displayed through the collaborative display instruction.
Specifically, drawing editing is carried out based on the editing direct target layer: attention is focused on this specific layer, and various concrete editing operations are performed, such as adding new graphic elements, modifying the shapes of existing elements, and adjusting colors. Since the layer was already locked in step S200, it will not be disturbed by other users during editing, so the required editing task can be completed efficiently.
Meanwhile, editing effect identification and evaluation is carried out through the effect identification collaborative layer. The effect identification collaborative layer is closely related to the editing direct target layer: when the editing direct target layer is edited, the effect identification collaborative layer changes correspondingly, and from these changes the effect of the editing operation is evaluated and it is judged whether the expected design target has been reached. For example, if a graphic is resized on the editing direct target layer, the effect identification collaborative layer is observed to see whether the position or scale of other graphics associated with it is affected. In this way, problems that may occur during editing are discovered in time for adjustment and optimization.
Finally, editing annotations are output. They are records of and feedback on the editing process and its effects, including explanations of the editing operations, problems encountered, and proposed improvements, and they can be displayed collaboratively through the collaborative display instruction. In a multi-user collaborative editing environment, all participating users can therefore see the annotations, better understand the progress and state of the editing, communicate and discuss, and solve problems together, improving the quality and efficiency of drawing editing. At the same time, collaboratively displayed annotations help maintain the consistency of team collaboration, leading to a better editing result and final design objective.
In one possible implementation, step S300 further includes:
Step S310, obtaining the editing operation type.
Step S320, based on the editing operation type, acquiring multi-operation function edge nodes, wherein a multi-operation function edge node is an editor constructed according to the functional characteristics of different CAD editing tools.
Step S330, selecting a target function from the multi-operation function edge nodes, connecting the matching multi-operation function edge node, loading the editing operation task into it for the drawing editing operation, and returning the editing completion result to the cloud.
Specifically, acquiring the editing operation type is a key step. Different CAD tools differ in their functions: some excel at drawing annotation and can provide accurate and rich annotation functions, while others are better at processing design elements, offering more diverse design element libraries and stronger processing capacity. In view of these differences, corresponding edge editing nodes are built for the different tool types, giving cloud users a rich set of choices. The cloud integrates multiple types of function processing nodes, which the user selects among flexibly according to the specific editing requirements. For example, when a user needs to annotate a drawing in detail, an edge editing node with powerful annotation functions is selected; when richer design elements are needed, the user can switch to a node with advantages in design element processing. In this way users can fully exploit the characteristics of different CAD tools, the efficiency and quality of drawing editing are improved, and various complex editing requirements are met.
Multi-operation function edge nodes are then acquired based on the determined editing operation type. Because different CAD editing tools have unique functional characteristics, multi-operation function edge nodes are constructed to meet diversified editing requirements. These edge nodes are editors created around the advantages and features of the different CAD tools. For example, some CAD tools perform well in graphics rendering and can provide high-precision drawing functionality and rich drawing tool options; edge nodes built for such tools play an important role in rendering-related editing operations. Other CAD tools specialize in dimension labeling, layer management, or three-dimensional modeling, and the corresponding edge nodes exhibit equally powerful functions for those editing operation types. In this way, the functional advantages of different CAD editing tools are integrated into the multi-operation function edge nodes, providing richer and more flexible choices for users editing drawings. After the user determines the specific editing operation type, the nodes best suited to the current task are quickly screened from the multi-operation function edge nodes, so that the drawing editing work can be finished in the most efficient way.
The user selects the target function among the multiple multi-operation function edge nodes; since the edge nodes are constructed according to the functional characteristics of different CAD editing tools, the selection is made according to specific requirements and usage habits. For example, if the user is performing a complex graphics rendering task, an edge node that is powerful in graphics rendering is selected. Once the target function is determined, the matching operation function edge node is connected; this connection ensures that the editing operation is performed accurately in the appropriate environment. The editing operation task is then loaded into the matching operation function edge node for the drawing editing operation, and at this stage the edge node uses its specific functions and advantages to edit the drawing efficiently and accurately. Whether it is modification of graphics, addition of labels, or adjustment of layers, the task is handled properly in this specialized edge node. After the editing operation is finished, the result is returned to the cloud. Users may edit on different devices with different tools, and returning the completion result to the cloud realizes data synchronization and collaborative work across devices: all users participating in the editing can view the edited result at the cloud in real time for further discussion and adjustment. Meanwhile, the storage and management functions of the cloud ensure the safety and accessibility of the data, so that users can conveniently continue the editing work at any time and place, greatly improving working efficiency and collaboration.
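The connect-load-return cycle of step S330 can be sketched as follows, with in-process objects standing in for what would really be network calls between the cloud and the edge nodes; the EdgeNode and Cloud classes and the functional tags are illustrative assumptions.

```python
# Hedged sketch of step S330: pick an edge node offering the requested target
# function, send it the editing task, and return the result to the cloud.
class EdgeNode:
    def __init__(self, name, functions):
        self.name = name
        self.functions = set(functions)  # functional tags this editor offers

    def run(self, task):
        # A real node would drive the underlying CAD tool here.
        return f"{task['operation']} done by {self.name}"

class Cloud:
    def __init__(self):
        self.results = []

    def store(self, result):
        # Returning results to the cloud keeps all collaborators in sync.
        self.results.append(result)

def dispatch(task, nodes, cloud):
    """Connect to the first node that supports the task's target function."""
    for node in nodes:
        if task["function"] in node.functions:
            cloud.store(node.run(task))  # load task, run, return to cloud
            return node.name
    raise LookupError("no edge node offers the requested function")

nodes = [EdgeNode("anno-node", {"annotation"}),
         EdgeNode("draw-node", {"graphics-rendering"})]
cloud = Cloud()
chosen = dispatch({"operation": "add dimension labels",
                   "function": "annotation"}, nodes, cloud)
print(chosen, cloud.results)
# anno-node ['add dimension labels done by anno-node']
```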
In one possible implementation, step S320 further includes:
Step S321, collecting multiple types of CAD tools, carrying out drawing function identification and evaluation on the multiple types of CAD tools, and screening out the tools whose drawing function evaluation results meet preset requirements.
Step S322, respectively constructing edge editing nodes for each screening target tool.
Step S323, carrying out functional characteristic analysis on the screening target tools, generating functional analysis labels, establishing a mapping association between each target tool name and its functional analysis labels, and adding the mapping association labels into the edge editing nodes as operation function descriptions.
In particular, the collection of multiple types of CAD tools is first performed, which means that a wide variety of different CAD software tools are collected, from different developers, with different functional features and application scenarios. And carrying out drawing function identification evaluation on the multi-type CAD tools. In this evaluation process, the drawing function of each tool is analyzed in depth from a plurality of angles. For example, consider the drawing accuracy of the tool to see if it can accurately draw complex graphic details, evaluate the drawing speed to determine the efficiency in processing large amounts of graphic data, check if the graphic types supported by the tool are rich and diverse, and meet different design requirements, and also pay attention to whether the tool has some special drawing assistance functions such as auto-alignment, intelligent capture, etc. Through the comprehensive evaluation indexes, a relatively accurate understanding of the drawing function of each CAD tool is provided. And screening the evaluation results according to preset requirements, wherein the preset requirements are determined according to specific drawing editing project requirements, for example, for an engineering drawing editing task with extremely high precision requirements, CAD tools with high drawing precision and accurate labeling function are screened out. Through a strict screening process, the finally left tool can meet the requirements of specific projects in the aspect of drawing functions, and reliable tool support is provided for subsequent drawing editing work.
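Under the assumption that the identification evaluation yields a numeric score per indicator, the screening in step S321 reduces to a threshold filter; every tool name, indicator, and value below is invented purely for illustration.

```python
# Illustrative sketch of step S321's screening: score each collected CAD tool
# on several drawing-function indicators and keep only the tools whose scores
# meet the preset requirements.
tools = {
    "ToolA": {"precision": 0.95, "speed": 0.70, "format_support": 0.90},
    "ToolB": {"precision": 0.60, "speed": 0.95, "format_support": 0.80},
    "ToolC": {"precision": 0.92, "speed": 0.85, "format_support": 0.60},
}
# Preset requirements for a high-precision engineering drawing project:
requirements = {"precision": 0.90, "format_support": 0.70}

screened = [
    name for name, scores in tools.items()
    if all(scores[k] >= v for k, v in requirements.items())
]
print(screened)  # ['ToolA'] -- only ToolA meets both preset thresholds
```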
For each target tool screened in step S321 as meeting the preset requirements, an edge editing node is constructed. The edge editing node is custom-designed according to the characteristics and functions of the specific tool, and its role is to better integrate and utilize the tool's advantages so that its functions can be called efficiently during drawing editing. For example, if a screening target tool performs well in terms of the speed and precision of drawing graphics, the edge editing node built for it optimizes the data transmission and processing flows, ensures quick response when drawing operations are performed with the tool, and accurately integrates the graphics drawn by the tool into the whole drawing. Meanwhile, the edge editing node provides a corresponding interface and interaction mode matching the tool's operation interface and command system, so that the user can edit more conveniently and smoothly. By constructing an edge editing node for each screening target tool, the advantages of the different tools are brought into full play, richer and more efficient tool choices are provided for drawing editing, and various complex editing requirements are met.
Analyzing the functional characteristics of the screened target tool, analyzing the functional performance of the tool in different aspects, for example, whether the tool can realize high-precision line drawing in the aspect of graphic drawing, whether the tool supports the importing and exporting of various graphic formats, whether the tool has convenient undoing and redoing functions in the editing operation, whether the tool can quickly manage the graphic layers, and the like. Through detailed analysis of these functional features, the unique advantages and applicable scenarios of each target tool are accurately grasped. Functional analysis labels are generated according to the functional characteristic analysis result, and the labels are brief summary and description of the functions of the target tool, such as a high-precision drawing tool, a high-efficiency layer management tool, a multi-format compatible tool and the like, and each label can intuitively reflect an important functional characteristic of the tool. Then, a mapping association of the target tool name and the function analysis tag is established, so that in the subsequent use process, the corresponding function description can be quickly found through the tool name, and a user can conveniently select a proper tool. For example, when a user sees a tool name, the user can immediately learn the main functional features of the tool by mapping the association to determine whether it is suitable for the current editing task. Finally, the mapping association label is added to the edge editing node to carry out operation function description, so that in the edge editing node, a user can not only see the name of the tool, but also know the specific function and advantage of the tool through the function description label, and the user can know the function of the tool corresponding to each node more accurately when selecting the edge editing node, and drawing editing operation is carried out more efficiently. Through the series of steps, the clear presentation and effective utilization of the functional characteristics of the screening target tool are realized, and a more convenient and efficient drawing editing environment is provided for users.
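One plausible reading of step S323's mapping association, assuming a functional analysis label is simply a short tag string attached to the edge editing node; FUNCTION_TAGS and EdgeEditingNode are hypothetical names.

```python
# Sketch of step S323: map each screened tool name to its function analysis
# labels and attach that mapping to the corresponding edge editing node as
# its operation function description.
FUNCTION_TAGS = {
    "ToolA": ["high-precision drawing", "multi-format compatible"],
    "ToolC": ["efficient layer management"],
}

class EdgeEditingNode:
    def __init__(self, tool_name):
        self.tool_name = tool_name
        # Mapping association: tool name -> function analysis labels.
        self.description = FUNCTION_TAGS.get(tool_name, [])

    def describe(self):
        return f"{self.tool_name}: {', '.join(self.description)}"

nodes = [EdgeEditingNode(t) for t in FUNCTION_TAGS]
for n in nodes:
    print(n.describe())
# ToolA: high-precision drawing, multi-format compatible
# ToolC: efficient layer management
```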
In one possible implementation, as shown in fig. 2, step S330 further includes:
Step S331, obtaining load operation characteristics of the multi-operation function edge nodes, wherein the load operation characteristics are used for representing the current load processing condition of each edge node.
Step S332, analyzing the loading processing load characteristics of the cloud drawing based on the editing operation task, wherein the loading processing load characteristics are used for representing the processing parameter characteristics of the current task requirement.
Step S333, carrying out load efficiency evaluation by utilizing the loading processing load characteristics and the load operation characteristics to obtain a load evaluation result.
Step S334, taking the load evaluation result as node selection feedback information, enabling the cloud editing user to select a target function, and determining the matching operation function edge node.
Specifically, obtaining the load operation characteristics of the multi-operation function edge nodes is significant: they are the key indicators for knowing the current working state of each edge node. The load operation characteristics cover several aspects. First is the number of tasks being processed; if an edge node is processing multiple editing tasks simultaneously, its load is relatively heavy, and knowing this determines whether the node has enough resources to take on a new task. Second, the progress of the tasks being processed is also an important load operation characteristic; if a node's task progress is slow, it is encountering difficulty or a resource shortage on the current task, which likewise affects its ability to accept new tasks. In addition, resource occupancy is one of the key load operation characteristics, including CPU utilization, memory occupancy, the usage proportion of storage resources, and so on; a high resource occupancy generally means the node is heavily loaded, which affects the speed and efficiency with which new tasks are processed. By acquiring the load operation characteristics, the current load condition of the edge nodes is grasped in real time, and task allocation and scheduling are carried out accordingly, so that the load of each edge node stays relatively balanced and overload of certain nodes does not degrade the performance of the whole drawing editing system. Meanwhile, when selecting an edge node for editing, the user can refer to the load operation characteristics and choose a node with a lighter load and stronger processing capability, improving editing efficiency and reducing waiting time.
When facing an editing operation task, the cloud drawing is deeply analyzed to determine loading processing load characteristics of the cloud drawing, and the load characteristics can accurately reflect the specific requirements of the current task on processing resources. First, the complexity of the task is an important consideration, and if the editing operation involves a large number of complex graphics drawing, modification, or complex layer operations, the processing power requirements are relatively high. For example, performing high-precision three-dimensional model editing requires great computational power and graphics processing power to ensure smooth operation and quick response. Second, the amount of data involved can also significantly impact the load characteristics of the loading process, requiring more memory and storage resources during loading and processing if the cloud drawing contains a large number of graphical elements, high resolution images, or complex geometries. At the same time, the speed of data transfer and processing is also affected, requiring faster network connections and more efficient data management mechanisms. Furthermore, the need for specific process parameters may also be part of the load profile. For example, certain tasks require specific algorithmic support, such as complex geometric transformation algorithms or intelligent pattern recognition algorithms, whose execution requires specific hardware resources or computational power, thereby affecting the overall task's processing load. The loading processing load characteristics of the cloud drawing are analyzed, the requirement of the current task is better known, an important basis is provided for the subsequent selection of the appropriate multi-operation function edge node, the efficient implementation of editing operation under the condition of meeting the resource requirement is ensured, and the quality and efficiency of the whole drawing editing process are improved.
Load efficiency evaluation using the loading processing load characteristics and the load operation characteristics is a key decision process. First, the loading processing load characteristics of the current task are compared with the load operation characteristics of each multi-operation function edge node. If the load operation characteristics of an edge node show a high resource occupancy while the loading processing load characteristics of the current task show that a large amount of resources is needed, the load efficiency of that node for this task will be low. Conversely, if an edge node is lightly loaded and its processing power matches the task's requirements, it will have a higher load efficiency in processing the task. Several factors must be considered in the evaluation. On one hand, it is evaluated whether the edge node has sufficient remaining resources to take on the current task, including CPU processing power, memory space, storage capacity, and so on. On the other hand, the urgency and time requirements of the task are also considered: if a task must be completed in a short time, edge nodes that can respond quickly and process the task efficiently should be selected. By comprehensively weighing these factors, a load evaluation result is obtained. The result may be expressed as a numerical value or graded, for example as high, medium, or low efficiency. The load evaluation result provides an important basis for subsequent decisions: whether tasks are assigned automatically or the user selects edge nodes manually, a more intelligent choice can be made according to this result, so that the drawing editing task is completed with the highest efficiency.
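A minimal sketch of such a graded load efficiency evaluation, comparing the task's resource demands with each node's remaining headroom; the feature names, numbers, and high/medium/low cutoffs are illustrative assumptions, not values from the patent.

```python
# Sketch of step S333: grade each edge node's load efficiency for a task by
# checking whether every resource demand fits into the node's headroom.
def load_efficiency(node_load, task_demand):
    """node_load and task_demand map resource -> fraction of capacity (0..1)."""
    headroom = {r: 1.0 - node_load.get(r, 0.0) for r in task_demand}
    # The node can take the task only if every resource demand fits.
    if any(task_demand[r] > headroom[r] for r in task_demand):
        return "low"
    slack = min(headroom[r] - task_demand[r] for r in task_demand)
    return "high" if slack >= 0.3 else "medium"

task = {"cpu": 0.3, "memory": 0.2}
nodes = {
    "node-1": {"cpu": 0.9, "memory": 0.5},  # nearly saturated CPU
    "node-2": {"cpu": 0.2, "memory": 0.1},  # lightly loaded
    "node-3": {"cpu": 0.5, "memory": 0.4},
}
for name, load in nodes.items():
    print(name, load_efficiency(load, task))
# node-1 low, node-2 high, node-3 medium
```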
The load evaluation result serves as node selection feedback information and provides an important decision basis for the cloud editing user. When the user is ready to carry out a drawing editing operation and faces multiple multi-operation function edge nodes to choose from, the load evaluation result helps the user find, among those nodes, the target function best suited to the current task. It presents the efficiency of each edge node for the specific task in a clear and easily understood manner: if a node's evaluation shows high efficiency, the node performs excellently in resource utilization, processing speed, and related aspects, and can load and process the current task quickly, so the user may prioritize such nodes for the editing operation. At the same time, the load evaluation result helps the user avoid nodes that would cause task delays or processing difficulties; for example, if a node's evaluation is low efficiency, the user knows it may not be suitable for the current task and selects a more suitable node instead. Taking the load evaluation result as node selection feedback, the user can determine the matching operation function edge node more accurately, which improves editing efficiency, reduces unnecessary waiting time and wrong selections, and ensures the smooth progress of the whole drawing editing process. The user thus makes an informed decision from the task demands, the time requirements, and the load evaluation result, and selects the edge node that best meets the needs, achieving efficient, high-quality drawing editing.
In one possible implementation, step S100 further includes:
Step S110, carrying out region frame selection on the drawing according to the editing target and operation content, wherein the region frame selection targets include specific layers and design elements, and determining the editing target region.
Step S120, analyzing the layer structure, object attributes and operation content of the editing target area, and determining core operation parameters.
Step S130, performing direct editing object identification on the editing target area according to the core operation parameters to obtain the editing direct target layer.
Step S140, according to the core operation parameters, performing influence analysis on the editing direct target layer and other layers of the connection space to obtain the effect identification collaborative layer, wherein the effect identification collaborative layer is layer information having an editing influence relation with the editing direct target layer under editing by the core operation parameters.
Specifically, first, in order to determine an editing target area, it is necessary to perform area framing of a drawing according to an editing target and operation contents, and a user achieves this target in various ways. On the one hand, manual frame selection allows a user to select an area to be edited directly through a mouse or a touch screen, and the mode gives the user maximum flexibility and intuitiveness, and a specific area is accurately selected according to own visual judgment. For example, when the user wants to make a fine edit on a small area in the drawing, this area is precisely locked by manual box selection. On the other hand, the condition selection automatically screens according to the specific layers or design elements in the drawing, for example, if the user wants to edit the electrical layers or select the specific design elements such as walls, doors and windows, etc., the corresponding areas are automatically screened according to the conditions, which greatly improves the selection efficiency, especially when the drawing is very complex and the manual searching of the specific elements is very time-consuming. In addition, parameterized selection automatically frames the regions meeting the conditions based on preset conditions. For example, conditions such as specific dimensions, material properties, or drawing notes may be used as a basis for filtering, and if a user wants to find all elements of a specific size or parts with specific material properties, parameterized choices may quickly lock these areas. The process of the region frame selection has definite target, namely, the region which the user wants to operate is precisely locked, and the accuracy of subsequent editing is ensured.
The editing target area is analyzed from multiple aspects to determine the core operation parameters. First comes layer structure analysis: the editing target area comprises multiple layers, each with specific functions and roles. Analyzing the layer structure clarifies the hierarchical relationships between the different layers; for example, some layers are basic layout layers bearing the overall frame structure, while others are decorative or labeling layers. Knowing this hierarchy helps determine which layers are key design layers that directly influence the overall design effect during editing, and which are auxiliary, such as labeling or reference layers that mainly provide additional information. It is equally important to analyze the connections between layers; elements of one layer may be related to elements of other layers, and these relationships affect the scope and extent of editing operations. Next comes object attribute analysis: every design object in the editing target area has specific attributes, including physical attributes such as wall material, thickness, equipment size, and power; position attributes such as the coordinates of the object in the drawing; and other attributes such as color and texture. Analyzing the object attributes provides a concrete basis for subsequent editing operations. For example, when a wall is to be edited, knowing its material attributes helps determine the appropriate editing mode, whether modifying thickness, changing material, or adjusting position. Finally comes operation content analysis: according to the user's specific operation requirements, such as modification, deletion, addition, or adjustment, the corresponding editing tools and operation modes are identified. Different operation contents require different tools and methods. If the user selects a modification operation, it must be determined which specific attribute is modified, whether size, location, or another attribute, and corresponding parameterized operation options are prepared; if it is an addition operation, the type and location of the added object are determined and an appropriate adding tool is provided. Through this comprehensive analysis of layer structure, object attributes, and operation content, the core operation parameters are determined: the operation type captures the user's main operation intention, the target layer specifies which layers need to be operated on, the object type specifies which kinds of objects are to be operated on, and the attribute adjustments cover the attribute changes involved in the operation. The core operation parameters provide accurate guidance for subsequent editing operations and ensure an efficient and precise editing process.
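Expressed as a data structure, the core operation parameters distilled by step S120 might look like the following; the field names are hypothetical, the patent only requiring that operation type, target layer, object type, and attribute adjustments be covered.

```python
# Hypothetical container for the core operation parameters of step S120.
from dataclasses import dataclass, field

@dataclass
class CoreOperationParams:
    operation_type: str    # e.g. "modify", "add", "delete", "adjust"
    target_layer: str      # which layer the operation acts on
    object_type: str       # e.g. "wall", "device", "pipeline"
    attribute_changes: dict = field(default_factory=dict)  # attr -> (old, new)

params = CoreOperationParams(
    operation_type="modify",
    target_layer="wall",
    object_type="wall",
    attribute_changes={"thickness": ("200mm", "240mm")},
)
print(params)
```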
The core operation parameters provide an important basis for identifying the direct editing objects. First, the operation type determines the nature of the edit, such as modification, addition, or deletion; if the operation is a modification, the objects that are the direct targets of modification must be determined. The target layer information then narrows the search scope by specifying in which layers to look for direct editing objects, and the object type parameter helps quickly screen out design objects of a specific type, such as walls, devices, or pipelines. During identification, each layer and object in the editing target area is analyzed carefully; the objects matching the core operation parameters are marked as direct editing objects, and the layers where they reside constitute the editing direct target layer. For example, if the core operation parameters indicate that a particular type of device needs to be modified, objects of that device type are searched for in the entire editing target area and the layers in which they reside are determined. Once the editing direct target layer is obtained, subsequent editing operations can concentrate on that specific layer, which improves editing efficiency, ensures editing accuracy, and avoids unnecessary interference with other unrelated layers and objects. Meanwhile, a clearly identified editing direct target layer provides concrete objects and scope for other related operations such as layer locking and collaborative display, making the whole drawing editing process more orderly and efficient.
This influence analysis step is critical when editing cloud drawings. After the editing direct target layer is identified, the relationship between that layer and the other layers of the connection space is explored thoroughly. The other layers of the connection space cover the layers that are logically related or physically connected to the direct target layer. Taking building design as an example, when a specific wall layer is determined to be the editing direct target layer, the electrical pipeline layer, the door and window position layer, and similar layers related to that wall constitute the other layers of the connection space. In practical architectural design, a change to a wall often affects the layout of the electrical pipelines; for example, moving the wall forces the pipelines to be re-routed. Likewise, a modification of the wall may affect the positions of doors and windows; for example, thickening the wall requires adjusting the relative positions of the doors and windows to the wall. When the influence analysis is carried out, the potential influence of modifying the direct target layer on the other layers is judged according to the core operation parameters. If the core operation parameters indicate that the size of the wall is to be modified, the chain reactions caused by the modification are considered: the size change requires the dimension information in the labeling layer to be updated synchronously, otherwise the labels will no longer match reality, and the designed positions of doors and windows must also be adjusted so that they match the modified wall. Through such analysis, the layer information that has an editing influence relation with the editing direct target layer under the core operation parameters is determined, and these layers form the effect identification collaborative layer. With the effect identification collaborative layers available, the user can take the interactions between layers into account more comprehensively during editing, avoiding inconsistencies or errors in the overall design caused by local edits. At the same time, this provides the key basis for subsequent collaborative display and synchronized viewing, so that the user learns the changes of other related layers in real time during editing, makes timely adjustments, and preserves the integrity and coordination of the whole drawing.
In one possible implementation, step S140 further includes:
Step S141, carrying out hierarchical relationship and spatial distribution relationship identification based on the editing direct target layer to obtain relation layers.
Step S142, carrying out relevance analysis on each relation layer according to the core operation parameters, determining the association degree among the layers, and establishing a layer association matrix.
Step S143, carrying out hierarchical and spatial connection influence relation analysis on the layer association matrix based on the influence result of the core operation parameters on the editing direct target layer to obtain the operation influence.
Specifically, hierarchical relationship and spatial distribution relationship identification is carried out based on the editing direct target layer to obtain the relation layers. Hierarchical relationship identification determines the position of the editing direct target layer in the layer structure of the whole drawing; for example, it may sit below an important design layer or stand in a hierarchical relationship with certain auxiliary layers. Spatial distribution relationship identification, meanwhile, focuses on the positional relationship between the editing direct target layer and other layers in the spatial dimension. In a building drawing, for example, if the editing direct target layer represents a certain building structure, it may spatially have adjacent, overlapping, or interacting relationships with the electrical facilities, piping arrangements, and so on in other layers. Through this identification process, the relation layers that are hierarchically and spatially closely related to the editing direct target layer are found.
The relevance analysis is then carried out on each relation layer according to the core operation parameters to determine the association degree among the layers and establish the layer association matrix; the core operation parameters provide the specific direction and basis for this analysis. For example, if the core operation parameters indicate a size modification to the editing direct target layer, then for each relation layer the extent to which that size modification may have an effect is analyzed. The association degree is represented by a numerical value: the higher the value, the closer the association between the two layers. By carrying out such analysis on all the relation layers, the layer association matrix is established, which clearly shows the degree of association between the layers and provides important data support for subsequent analysis and decisions.
And carrying out hierarchical and spatial connection influence relation analysis on the layer association matrix based on the influence result of the core operation parameters on the edited direct target layer to obtain operation influence. In this step, the specific effect of the core operating parameters on editing the direct target layer, such as dimensional changes, position movements, etc., is first considered. The propagation effect of this effect on other relationship layers on the hierarchical and spatial connections is then analyzed in conjunction with the layer association matrix. For example, if the size of the edited direct target layer is increased, by analyzing the layer association matrix, it is determined which relationship layers are directly affected, e.g., adjacent layers may need to be adjusted accordingly to accommodate the change, and at the same time, it may be determined which layers are indirectly affected, e.g., layers affecting other regions through spatial connection relationships. Through the analysis, the influence of the editing operation on each layer in the whole drawing is comprehensively known, accurate reference is provided for a user when the editing operation is performed, and the overall harmony and the integrity of the drawing are ensured.
In one possible implementation, step S143 further includes:
Step S1431, calculating the spatial relationship before and after editing according to the editing result of the core operation parameters on the editing direct target layer.
Step S1432, obtaining the association degree between the layers according to the layer association matrix.
Step S1433, carrying out influence analysis on each relation layer in the layer association matrix according to the spatial relationship before and after editing and the association degree, in combination with the initial spatial distance between the layers, to obtain the operation influence.
Specifically, calculating the spatial relationship before and after editing on the editing result of editing the direct target layer according to the core operation parameters is a key step. First, the type of editing operation indicated by the core operation parameters is specified, for example, operations such as translation, scaling, rotation, or shape change, which directly affect the position and form of the editing direct target layer in space. If the translation operation is performed, the displacement in the horizontal direction and the vertical direction needs to be determined, and the spatial displacement values in the two directions are calculated by comparing the coordinate changes of specific reference points (such as a center point or a certain corner point) on the target layers before and after editing, so that the spatial relationship before and after editing is obtained. For the scaling operation, the scale factor is calculated by comparing the bounding box sizes of the target layers before and after editing or the sizes of specific key elements, focusing on the size change ratio of the target layers in each dimension, thereby describing the change in size of the spatial relationship before and after editing. In the case of a rotation operation, it is necessary to determine the angle of rotation as well as the center of rotation. The change in angle and position of the spatial relationship before and after editing is accurately determined by calculating the angular change of the elements on the target layer with respect to the center of rotation and observing the spatial position change of these elements before and after rotation. For shape changing operation, complex geometric transformation is involved, and the shape change condition of each element in the target layer, such as the change of the bending degree of the line, the increase, decrease or deformation of the graph, and the like, needs to be analyzed, and the complex change of the spatial relationship before and after editing in terms of shape is determined by quantitatively analyzing the shape changes, calculating the perimeter and area changes of the graph, or the curvature changes of the line, and the like. By accurately calculating the spatial relation change caused by different editing operation types, a foundation is laid for further analyzing the influence of the editing operation on other layers.
Obtaining the association degree between layers from the layer association matrix is a straightforward and critical operation. The layer association matrix R presents the degree of association between different layers in a clear manner: each element R_ij of the matrix corresponds to the association degree between two particular layers T_i and T_j. For example, when the association degree between the wall layer (T_wall) and the door and window layer (T_window-door) is needed, the corresponding element in the matrix is read directly; a value of 0.8 indicates that the association between the wall and the door and window layer is high. Similarly, to understand the association between the wall layer and the electrical conduit layer (T_electric), a matrix value of 0.5 shows that their association is relatively low. The association degree between any other two layers can be acquired quickly and accurately from the layer association matrix in the same way. This matrix-based acquisition provides an important data basis for the subsequent influence analysis, so that the influence of an editing operation on different layers can be estimated more accurately.
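Retrieving R_ij is then a direct lookup, as in this sketch that reuses the example values quoted above (0.8 for wall/window-door, 0.5 for wall/electric); representing R as a nested dict, and the remaining entries, are implementation assumptions.

```python
# Direct lookup of association degrees from the layer association matrix R.
R = {
    "wall":        {"wall": 1.0, "window-door": 0.8, "electric": 0.5},
    "window-door": {"wall": 0.8, "window-door": 1.0, "electric": 0.2},
    "electric":    {"wall": 0.5, "window-door": 0.2, "electric": 1.0},
}

def association(i, j):
    return R[i][j]

print(association("wall", "window-door"))  # 0.8 -- tightly coupled layers
print(association("wall", "electric"))     # 0.5 -- weaker coupling
```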
When the influence analysis is performed to obtain the operation influence, three key factors must be considered together: the spatial relationship before and after editing, the association degree, and the initial spatial distance between the layers. First, the spatial relationship before and after editing reflects the change in the position, size, shape, and similar properties of the editing direct target layer after the editing operation. For example, if the editing operation translates the target layer, its relative position in space with respect to other layers changes; if it is a scaling operation, the dimensional change of the target layer affects its spatial interaction with other layers. This change of spatial relationship is one of the important factors influencing other layers. Second, the association degree reflects the interdependence between layers based on hierarchical relationships, design logic, or function. The association degree value obtained from the layer association matrix directly indicates how tightly the target layer is coupled to each relation layer; the higher the association degree, the greater the potential influence on the corresponding relation layer when the target layer is edited. Finally, the initial spatial distance between layers also plays an important role in the influence analysis: even if two layers have some degree of association, if their initial spatial distance is large, the influence of an editing operation on them may be relatively small. For example, in a building drawing, two functionally relatively independent layers located on different floors may have a certain association degree, yet editing a layer on one floor has limited impact on the corresponding layer of the other floor because of the large initial spatial distance. Taking these three factors into consideration, the influence analysis is carried out on each relation layer in the layer association matrix using the calculation formula given in the implementation below. Through this analysis, the influence degree of the editing operation on each relation layer, namely the operation influence, can be obtained accurately, ensuring the precision and coordination of the drawing editing process.
In one possible implementation, step S1433 further includes:
The operation influence is calculated as

$$I_{ij} = \alpha \cdot \frac{\Delta S_i \cdot \Delta D_i \cdot R_{ij}}{d_{ij}}$$

wherein I_ij is the influence of the target layer i on the relation layer j; ΔS_i is the spatial relationship before and after editing, namely the spatial displacement of the target layer; ΔD_i is the dimensional change of the target layer; R_ij is the association degree between the target layer i and the relation layer j; d_ij is the initial spatial distance between the target layer i and the relation layer j; and α is an adjusting coefficient used to adjust the intensity of the influence.
In particular, I_ij represents the influence of the target layer i on the relation layer j. ΔS_i is the spatial relationship between the target layer before and after editing, specifically the spatial displacement of the target layer, reflecting how its position changes during the editing operation. ΔD_i is the size change of the target layer, reflecting the change in its physical dimensions during editing. R_ij is the association degree between the target layer i and the relation layer j, based on the hierarchical relationships, design logic, or functional interdependence of the layers; it ranges from 0 (no association) to 1 (strong association) and discloses how tight the internal relation between the two layers is. d_ij is the initial spatial distance between the target layer i and the relation layer j; this parameter takes the original spatial positional relationship between the layers into account, since distance moderates the influence. α is the adjusting coefficient used to adjust the influence intensity, and can be set according to factors such as the specific drawing type and editing requirements.
By introducing the product of the spatial displacement and the size change, the formula captures the joint effect of position and physical-size changes on the associated layers: changes in both the position and the size of the target layer caused by the editing operation propagate to its associated layers. By introducing the initial distance, the formula weights the influence of the spatial displacement by proximity: layers closer to the target layer are affected more strongly by its displacement, while layers farther away are affected less. In addition, combining the association degree with the spatial variation makes the computed influence interact dynamically with both the internal relationships and the physical positions of the layers, ensuring a more accurate influence assessment. This dynamic interaction means that the physical position changes of the layers and their internal associations are considered together, so the influence calculation is more accurate and comprehensive and better reflects the degree to which the editing operation affects each layer of the whole drawing.
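As a minimal illustrative sketch (not part of the patent text), the formula can be evaluated as follows; the function name, variable names, and sample values are assumptions introduced here, since the specification defines the formula but no implementation:

```python
import numpy as np

def layer_influence(delta_s, delta_d, r_ij, d_ij, lam=1.0):
    """Influence of target layer i on each relation layer j:
    E_ij = lam * (delta_s * delta_d * r_ij) / d_ij
    """
    d_ij = np.asarray(d_ij, dtype=float)
    if np.any(d_ij <= 0):
        raise ValueError("initial spatial distance d_ij must be positive")
    return lam * (delta_s * delta_d * np.asarray(r_ij, dtype=float)) / d_ij

# Target layer translated by 3.0 units with a size change of 1.5, evaluated
# against three relation layers (one row of the layer association matrix):
r = [0.9, 0.4, 0.1]   # association degrees R_ij in [0, 1]
d = [2.0, 5.0, 20.0]  # initial spatial distances d_ij
print(layer_influence(3.0, 1.5, r, d))  # -> [2.025, 0.36, 0.0225]
```

Consistent with the description above, the nearby, strongly associated layer receives the largest influence score, while the distant, weakly associated layer is barely affected.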
It should be noted that the ordering of the embodiments of the present application is for description only and does not imply any ranking of the embodiments. The foregoing description addresses specific embodiments of this specification. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or any sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
The foregoing description of the preferred embodiments is not intended to limit the application to the precise forms disclosed; any modifications, equivalents, and alternatives falling within the spirit and scope of the application are intended to be included within its scope.
The specification and figures are merely exemplary illustrations of the present application, which is considered to cover any and all modifications, variations, combinations, or equivalents falling within its scope. It will be apparent to those skilled in the art that various modifications and variations can be made without departing from the scope of the application; the present application is therefore intended to include such modifications and alterations insofar as they come within the scope of the application or its equivalents.

Claims (6)

1. An intelligent cloud drawing editing method, characterized by comprising the following steps:
setting an editing target area of a drawing through a cloud, and performing target analysis according to the editing target area to obtain an editing direct target layer and an effect identification cooperative layer;
respectively generating a layer locking instruction and a cooperative display instruction according to the editing direct target layer and the effect identification cooperative layer, wherein the layer locking instruction is used for locking the editing direct target layer, and the cooperative display instruction is used for performing an editing synchronization viewing operation on the effect identification cooperative layer;
editing the drawing based on the editing direct target layer, performing editing effect identification evaluation through the effect identification cooperative layer, and outputting editing comments, wherein the editing comments can be cooperatively displayed through the cooperative display instruction;
wherein editing the drawing based on the editing direct target layer comprises:
acquiring an editing operation type;
acquiring a multi-operation function edge node based on the editing operation type, wherein the multi-operation function edge node is an editor constructed according to the functional characteristics of different CAD editing tools;
selecting a target function from the multi-operation function edge node, connecting a matching operation function edge node, loading an editing operation task into the matching operation function edge node to perform the drawing editing operation, and returning an editing completion result to the cloud;
and wherein performing target analysis according to the editing target area to obtain the editing direct target layer and the effect identification cooperative layer comprises:
carrying out region frame selection on the drawing according to an editing target and operation content, wherein the objects of region frame selection comprise specific layers and design elements, and determining the editing target area;
analyzing the layer structure, object attributes, and operation content of the editing target area, and determining core operation parameters;
performing direct editing object identification on the editing target area according to the core operation parameters to obtain the editing direct target layer;
and according to the core operation parameters, performing influence analysis on the editing direct target layer and other spatially connected layers to obtain the effect identification cooperative layer, wherein the effect identification cooperative layer is layer information that has an editing influence relationship with the editing direct target layer under editing with the core operation parameters.
2. The intelligent cloud drawing editing method as claimed in claim 1, wherein acquiring the multi-operation function edge node based on the editing operation type comprises:
collecting multiple types of CAD tools, carrying out drawing function identification evaluation on the multiple types of CAD tools, and screening out target tools whose drawing function evaluation results meet preset requirements;
constructing an edge editing node for each screened target tool;
and carrying out functional characteristic analysis on each screened target tool, generating a functional analysis tag, establishing a mapping association between the target tool name and the functional analysis tag, and adding the mapping association tag to the edge editing node as an operation function description.
3. The intelligent cloud drawing editing method as claimed in claim 1, wherein selecting a target function from the multi-operation function edge node and connecting a matching operation function edge node comprises:
acquiring load operation characteristics of the multi-operation function edge node, wherein the load operation characteristics represent the current load processing condition of each edge node;
analyzing loading processing load characteristics of the cloud drawing based on the editing operation task, wherein the loading processing load characteristics represent the processing parameter characteristics of the current task demand;
carrying out load efficiency evaluation using the loading processing load characteristics and the load operation characteristics to obtain a load evaluation result;
and taking the load evaluation result as node selection feedback information, enabling the cloud editing user to select a target function, and determining the matching operation function edge node.
4. The intelligent cloud drawing editing method as claimed in claim 1, wherein performing the influence analysis on the editing direct target layer and other spatially connected layers according to the core operation parameters comprises:
performing hierarchical relationship and spatial distribution relationship identification based on the editing direct target layer to obtain relation layers;
carrying out association analysis on each relation layer according to the core operation parameters, determining the association degree between the layers, and establishing a layer association matrix;
and carrying out hierarchical and spatial-connection influence relationship analysis on the layer association matrix, based on the influence result of the core operation parameters on the editing direct target layer, to obtain the operation influence.
5. The intelligent cloud drawing editing method as claimed in claim 4, wherein carrying out hierarchical and spatial-connection influence relationship analysis on the layer association matrix, based on the influence result of the core operation parameters on the editing direct target layer, to obtain the operation influence comprises:
calculating the spatial relationship before and after editing according to the editing result of the core operation parameters on the editing direct target layer;
acquiring the association degree between the layers according to the layer association matrix;
and according to the spatial relationship before and after editing and the association degree, combined with the initial spatial distance between the layers, carrying out influence analysis on each relation layer in the layer association matrix to obtain the operation influence.
6. The intelligent cloud drawing editing method as claimed in claim 5, wherein the calculation formula of the influence analysis is: $E_{ij} = \lambda \cdot \dfrac{\Delta S \cdot \Delta D \cdot R_{ij}}{d_{ij}}$, where $E_{ij}$ is the influence of the target layer i on the relation layer j; $\Delta S$ is the spatial relationship before and after editing, i.e., the spatial displacement of the target layer; $\Delta D$ is the dimensional change of the target layer; $R_{ij}$ is the association degree between the target layer i and the relation layer j; $d_{ij}$ is the initial spatial distance between the target layer i and the relation layer j; and $\lambda$ is the adjustment coefficient, used to adjust the intensity of the influence.
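By way of illustration only, and not as part of the claims, the following sketch applies the influence analysis of claims 4 to 6 across several relation layers; the layer names, sample values, and the reporting threshold are assumptions introduced here:

```python
# Hypothetical relation layers around an edited target layer. The association
# degrees r (one row of the layer association matrix) and the initial spatial
# distances d are assumed sample values.
relation_layers = {
    "dimension_layer":  {"r": 0.8, "d": 1.5},
    "annotation_layer": {"r": 0.5, "d": 4.0},
    "furniture_layer":  {"r": 0.2, "d": 12.0},
}

def operation_influence(delta_s, delta_d, layers, lam=1.0, threshold=0.1):
    """Apply E_ij = lam * (delta_s * delta_d * r_ij) / d_ij to every relation
    layer; layers whose score reaches the (assumed) reporting threshold are
    flagged as candidates for the effect identification cooperative layer."""
    scores = {name: lam * (delta_s * delta_d * p["r"]) / p["d"]
              for name, p in layers.items()}
    flagged = [name for name, e in scores.items() if e >= threshold]
    return scores, flagged

scores, cooperative = operation_influence(delta_s=2.0, delta_d=1.2,
                                          layers=relation_layers)
print(scores)       # operation influence E_ij per relation layer
print(cooperative)  # ['dimension_layer', 'annotation_layer']
```

Under these sample values, the nearby dimension and annotation layers exceed the threshold while the distant furniture layer does not, mirroring the distance weighting described in the specification.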
CN202411784926.4A 2024-12-06 2024-12-06 Cloud drawing intelligent editing method Active CN119272390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411784926.4A CN119272390B (en) 2024-12-06 2024-12-06 Cloud drawing intelligent editing method

Publications (2)

Publication Number Publication Date
CN119272390A (en) 2025-01-07
CN119272390B (en) 2025-03-04

Family

ID=94118094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411784926.4A Active CN119272390B (en) 2024-12-06 2024-12-06 Cloud drawing intelligent editing method

Country Status (1)

Country Link
CN (1) CN119272390B (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9779184B2 (en) * 2013-03-15 2017-10-03 Brigham Young University Scalable multi-user CAD system and apparatus
US20190272071A1 (en) * 2018-03-02 2019-09-05 International Business Machines Corporation Automatic generation of a hierarchically layered collaboratively edited document view
CN112214622A (en) * 2020-10-22 2021-01-12 兰居(北京)数字科技有限公司 Data processing method and device for rapidly displaying AutoCAD drawing
CN113591197B (en) * 2021-09-26 2022-01-18 深圳须弥云图空间科技有限公司 Online editing method and device, electronic equipment and storage medium
CN113868891B (en) * 2021-10-29 2024-09-03 无锡图智科技有限公司 CAD-based collaborative design platform and application method thereof
CN114169027A (en) * 2021-12-16 2022-03-11 福建永福信息科技有限公司 Collaborative drawing method, collaborative client and collaborative system based on CAD platform
CN116992517B (en) * 2023-09-28 2023-12-26 山东华云三维科技有限公司 Collaborative modeling method, server and terminal for three-dimensional CAD model
CN117313187A (en) * 2023-10-07 2023-12-29 中通服软件科技有限公司 Offline importing and merging method and device for CAD two-dimensional drawing and storage medium
CN117708903A (en) * 2023-12-04 2024-03-15 壹仟零壹艺网络科技(北京)有限公司 CAD drawing interaction management method and system based on multi-participant collaboration

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163555A (en) * 2019-04-11 2019-08-23 厦门亿力吉奥信息科技有限公司 Collaboration drawing management method, storage medium
CN110929310A (en) * 2019-11-25 2020-03-27 杭州群核信息技术有限公司 Cloud drawing intelligent generation and editing method

Also Published As

Publication number Publication date
CN119272390A (en) 2025-01-07

Similar Documents

Publication Publication Date Title
CN110488756B (en) Method and system for automatically calculating numerical control multi-row drill processing parameters of woodworking plate
US7492364B2 (en) System and method for creating and updating a three-dimensional model and creating a related neutral file format
US4845651A (en) Geometric modelling system
EP3667545A1 (en) System and method for customizing machined products
US5886897A (en) Apparatus and method for managing and distributing design and manufacturing information throughout a sheet metal production facility
US6065857A (en) Computer readable medium for managing and distributing design and manufacturing information throughout a sheet metal production facility
EP2741157B1 (en) Computer-implemented system and method for analyzing machined part manufacturability and performing process planning
US5828575A (en) Apparatus and method for managing and distributing design and manufacturing information throughout a sheet metal production facility
US7197372B2 (en) Apparatus and method for managing and distributing design and manufacturing information throughout a sheet metal production facility
US20140288892A1 (en) Modeless drawing windows for cad system
CN113360583A (en) Construction progress visualization method based on BIM model and monitoring image comparison
CN101387958B (en) Image data processing method and apparatus
US8843352B2 (en) System and methods facilitating interfacing with a structure design and development process
Fruchter Conceptual, collaborative building design through shared graphics
JP3803509B2 (en) Apparatus and method for distributing design and fabrication information throughout a sheet metal fabrication facility
CN119272390B (en) Cloud drawing intelligent editing method
Kalay Worldview: An integrated geometric-modeling/drafting system
Kösenciğ et al. Structural Plan Schema Generation Through Generative Adversarial Networks
Kocaturk et al. Exploration of interrelationships between digital design and production processes of free-form complex surfaces in a web-based database
Sun et al. Optimization strategy of architectural design based on data mining
Codex Advancements in Automation Techniques for CAD Model Generation and Variation in Aerospace Engineering
Shin et al. Developing ISO 14649-based conversational programming system for multi-channel complex machine tools
Lin et al. Optimization of Interior Engineering Design Process and Application Based on Enhancing Aesthetic Experience of Interactive Design
CN119783387A (en) A method for realizing MBD part model design change based on 3D annotation information
CN116012562A (en) A method to quickly complete the model making of BIM 3D visualization platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant