Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment," another embodiment "means" at least one additional embodiment, "and" some embodiments "means" at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that such references are to be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
FIG. 1 is a flow chart illustrating a stroke display method according to some embodiments. As shown in FIG. 1, an embodiment of the present disclosure provides a stroke display method, which may be performed by an electronic device, and in particular by a stroke display apparatus, where the apparatus may be implemented in software and/or hardware and configured in the electronic device. The method may include the following steps.
In step 110, a virtual scene picture is displayed.
Here, the virtual scene picture includes a virtual scene. A virtual scene is a scene displayed or provided by an application program running on an electronic device, and may refer to a game scene, such as a two-dimensional game scene, a three-dimensional game scene, a virtual reality scene, and so on.
It should be appreciated that the virtual scene picture may be an image captured from the virtual scene by a virtual camera.
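As an illustrative, non-limiting sketch in Unity (C#), such a capture could be performed by pointing a camera at a temporary render target; the class and method names here are assumptions for illustration only:

```csharp
using UnityEngine;

public static class ScenePictureCapture
{
    // A minimal sketch: "sceneCamera" is assumed to be the virtual camera
    // observing the virtual scene; the returned texture is the virtual scene picture.
    public static RenderTexture Capture(Camera sceneCamera)
    {
        RenderTexture picture = RenderTexture.GetTemporary(Screen.width, Screen.height);
        sceneCamera.targetTexture = picture;
        sceneCamera.Render();              // render the virtual scene into the texture
        sceneCamera.targetTexture = null;  // restore normal on-screen rendering
        return picture;
    }
}
```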
In step 120, in response to a selection operation for a virtual object in the virtual scene, the outer contour of the virtual object is traced in an image space corresponding to the virtual scene picture to obtain an outer contour tracing of the virtual object.
Here, the selection operation for the virtual object in the virtual scene may be a click operation, a box selection operation, or the like for the virtual object in the virtual scene. A virtual object may refer to an element used to build an environment in a virtual scene. For example, the virtual object may be a building, tree, etc. model in a virtual scene.
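For illustration only, a click selection might be detected in Unity by casting a ray from the cursor into the scene; this sketch assumes the virtual objects carry colliders, and the class name is illustrative:

```csharp
using UnityEngine;

public class SelectOnClick : MonoBehaviour
{
    void Update()
    {
        // On a left click, cast a ray from the cursor into the virtual scene;
        // the first collider hit is treated as the selected virtual object.
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                Debug.Log("Selected virtual object: " + hit.collider.name);
            }
        }
    }
}
```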
An outer contour tracing refers to an additional visual effect added along the outer contour of a virtual object. After a virtual object in the virtual scene is selected, the virtual object is in a selected state, and in order to present this state in the virtual scene, an outer contour tracing may be added to the virtual object to highlight the selected virtual object in the virtual scene.
The image space of the virtual scene picture is a two-dimensional space composed of the width and the height of the virtual scene picture. The image space corresponding to the virtual scene picture refers to an image space that coincides with the image space of the virtual scene picture.
It is noted that the size of the image space of the virtual scene picture may be determined by the size of the screen used to display the virtual scene picture. Therefore, the outer contour of the virtual object is traced in the image space corresponding to the virtual scene picture, which can also be understood as tracing the outer contour of the virtual object in the screen space.
The virtual object may be rendered into an image space corresponding to the virtual scene picture, and then, in the image space, an outer contour of the virtual object is traced based on a preset tracing width, so as to obtain an outer contour tracing of the virtual object.
It should be noted that tracing the outer contour of the virtual object in the image space corresponding to the virtual scene picture does not mean directly tracing the outer contour within the virtual scene itself; rather, the outer contour of the virtual object is traced in an image space consistent with the image space of the virtual scene picture.
It should be appreciated that in embodiments of the present disclosure, the obtained outer contour tracing of the virtual object may be an image that includes the outer contour tracing of the virtual object.
In step 130, the outer contour tracing of the virtual object is superimposed and displayed on the virtual scene picture, where the width of the outer contour tracing remains unchanged when the virtual scene is zoomed.
Here, the outer contour tracing of the virtual object in the selected state may be superimposed on the virtual scene picture, thereby displaying the outer contour tracing of the virtual object in the selected state.
As some examples, the obtained outer contour tracing of the virtual object may be displayed superimposed on an upper layer of the virtual scene picture.
As yet other examples, the obtained outer contour tracing of the virtual object may be fused into an upper layer of the virtual scene picture, so that the outer contour tracing is displayed by displaying the fused virtual scene picture.
That is, in the embodiment of the present disclosure, the outer contour tracing of the virtual object may be superimposed and displayed by one image (the virtual scene picture after fusion), or the outer contour tracing of the virtual object may be superimposed and displayed by two images (the virtual scene picture and the outer contour tracing).
It should be noted that, since the outer contour tracing of the virtual object is obtained by tracing the outer contour in the image space corresponding to the virtual scene picture, the width of the tracing remains the preset tracing width even when the displayed size of the virtual object changes. As a result, the width of the outer contour tracing of the virtual object remains unchanged when the virtual scene is zoomed.
For example, when a virtual scene is enlarged, the virtual object is enlarged accordingly, which corresponds to a smaller line-of-sight distance between the player and the virtual object; when the virtual scene is reduced, the virtual object is reduced accordingly, which corresponds to a larger line-of-sight distance. With the stroke display method provided by the embodiments of the present disclosure, the outer contour tracing is obtained in the image space corresponding to the virtual scene picture and is therefore tied to that image space, so even if the line-of-sight distance in the virtual scene changes, the width of the outer contour tracing is not affected. Accordingly, by superimposing the outer contour tracing obtained in image space on the virtual scene picture, the width of the tracing on the enlarged virtual object appears the same to the player as the width on the reduced virtual object, and the width of the outer contour tracing remains unchanged when the virtual scene is zoomed.
It should be noted that, if the outline were traced in the geometric space of the virtual scene, the tracing would correspond to a line inside the virtual scene; when the virtual scene is zoomed out or in, the tracing is zoomed correspondingly, so that its width changes with the line-of-sight distance, and when that distance grows large the user may no longer be able to see the tracing at all. Moreover, such a tracing is also affected by perspective, producing a wider tracing at near locations and a narrower tracing at far locations.
According to the stroke display method provided by the embodiments of the present disclosure, when a player zooms the virtual scene, the zoomed scene is presented through the virtual scene picture; because the outer contour tracing is obtained in the image space corresponding to the virtual scene picture, the width of the tracing stays the same even as the player zooms, which ensures the consistency of the outer contour tracing.
Therefore, by displaying the virtual scene picture, tracing the outer contour of the virtual object in the image space corresponding to the virtual scene picture in response to a selection operation for the virtual object, and superimposing and displaying the resulting outer contour tracing on the virtual scene picture, the width of the outer contour tracing is unaffected by zooming of the virtual scene and always remains consistent, thereby ensuring a consistent tracing effect.
In some implementations, the outer contour tracing of the virtual object is a complete outer contour tracing of the virtual object when the virtual object is at least partially occluded.
Here, the virtual object being at least partially occluded means that a part or all of the virtual object is occluded. FIG. 2 is a schematic diagram illustrating a virtual object being at least partially occluded, according to some embodiments. As shown in FIG. 2, in a virtual scene 201, a first virtual object 202 is occluded by a second virtual object 203 and a third virtual object 204. In this scenario, the outer contour tracing 205 of the first virtual object 202 is the complete outer contour tracing of the first virtual object 202.
It should be noted that the complete outer contour tracing of the virtual object refers to a tracing obtained by tracing the complete outer contour line of the virtual object. Even if the virtual object is at least partially occluded in the virtual scene, the obtained tracing is still the complete outer contour tracing of the virtual object.
Therefore, through this embodiment, even if the virtual object in the selected state is partially or completely occluded by other virtual objects in the virtual scene, the complete outer contour tracing of that virtual object can still be displayed, so that the player can clearly identify the selected virtual object in the virtual scene.
FIG. 3 is a flow chart illustrating a stroke display method according to further embodiments. As shown in FIG. 3, the method may include the following steps.
In step 310, a virtual scene picture is displayed.
Here, the detailed description of step 310 may refer to the related description of step 110 in the above embodiments, and will not be repeated here.
In step 320, the virtual object is rendered to a first image, where the first image is an image constructed based on an image space corresponding to the virtual scene picture.
Here, the virtual object in the selected state in the virtual scene may be individually rendered to the first image. The first image may be an image constructed based on an image space corresponding to the virtual scene picture. That is, the width and height of the first image are consistent with the width and height of the virtual scene picture.
It should be appreciated that the size, presentation angle, etc. of the virtual object in the first image are consistent with those of the virtual object in the virtual scene.
In Unity, the LayerMask functionality provided by Unity may be used to individually render the virtual object in the selected state into the first image. For example, the virtual object in the selected state may be assigned to a dedicated stroke layer, and the layer to be drawn may be filtered through FilteringSettings in the DrawRenderers function.
Of course, the CommandBuffer functionality provided by Unity may also be used to specify the virtual object in the selected state and thereby render it into the first image. A CommandBuffer stores a series of rendering commands, such as setting a render target or drawing a particular mesh. Through a CommandBuffer, custom rendering commands can be inserted and executed at specific points in the Unity built-in rendering pipeline, specifying the rendering of particular virtual objects so as to control the rendering flow at a finer granularity.
It should be noted that the embodiments of the present disclosure do not limit the specific implementation of rendering the virtual object to the first image; any game engine or any feasible rendering manner may be used.
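As one non-limiting sketch of step 320 under Unity's built-in pipeline, the selected object may be drawn alone into a temporary render target with a CommandBuffer; the selectedRenderer parameter and the white-on-black color choice here are illustrative assumptions rather than requirements of the method:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class FirstImageRenderer
{
    // A sketch only: renders the selected virtual object, in solid white, onto
    // a black first image the same size as the virtual scene picture.
    public static RenderTexture Render(Renderer selectedRenderer, int width, int height)
    {
        RenderTexture firstImage = RenderTexture.GetTemporary(width, height);
        var solidWhite = new Material(Shader.Find("Unlit/Color")) { color = Color.white };

        var cmd = new CommandBuffer { name = "StrokeFirstImage" };
        cmd.SetRenderTarget(firstImage);
        cmd.ClearRenderTarget(true, true, Color.black); // background differs from the solid color
        cmd.DrawRenderer(selectedRenderer, solidWhite); // draw only the selected object
        Graphics.ExecuteCommandBuffer(cmd);
        cmd.Release();
        return firstImage;
    }
}
```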
FIG. 4 is a schematic diagram of a virtual scene picture shown in accordance with some embodiments. As shown in FIG. 4, in the virtual scene picture 401, a virtual object 402 is a virtual object in a selected state.
FIG. 5 is a schematic diagram of a first image shown according to some embodiments. As shown in FIG. 5, the virtual object 402 in the virtual scene is rendered in a first image 403.
It should be noted that even if the virtual object is occluded in the virtual scene, it is still completely rendered in the first image. By rendering the virtual object to the first image, the complete outer contour tracing of the virtual object can therefore be obtained through the first image.
In some embodiments, the virtual object is rendered to the first image with a solid color, the first image having a color different from the solid color.
In the embodiments of the present disclosure, the solid color is used as the material of the virtual object, and the virtual object with the solid-color material is rendered into the first image. Illustratively, the solid color may be white and the color of the first image may be black. Of course, the solid color can be another color, set according to the actual situation.
It should be noted that rendering the virtual object to the first image by using the solid color means that the solid color is used as a rendering material of the virtual object, and then the virtual object of the solid color material is rendered to the first image. Since the solid color does not contain complex texture details, the inner contour of the virtual object in the first image may be weakened. Wherein the inner contour refers to an edge line inside the virtual object, which defines the shape and boundaries of the internal structure of the virtual object. For example, the inner contour may be a void, a groove, or a boundary between different surfaces inside the virtual object. Further, by rendering the virtual object to the first image using a solid color, it is possible to facilitate the subsequent use of color channels to distinguish between pixels belonging to the virtual object and pixels not belonging to the virtual object in the first image.
As shown in FIG. 5, the virtual object 402 is rendered, using a white material, in the first image 403 whose background is entirely black.
In other embodiments, when a plurality of virtual objects are selected, the plurality of virtual objects are rendered to the first image with the same solid color, so that an outer contour tracing corresponding to the overall outer contour formed by the plurality of virtual objects is obtained through the first image, where the color of the first image is different from the solid color.
A plurality of selected virtual objects means that a plurality of virtual objects in the virtual scene are in the selected state. When the plurality of virtual objects are rendered to the first image with the same solid color, their overlapping parts show no color difference, so the outer contour tracing corresponding to the overall outer contour formed by the plurality of virtual objects can be obtained through the first image.
It should be noted that the overall outer contour formed by a plurality of virtual objects refers to the contour curve formed when the virtual objects are combined together. When the plurality of virtual objects include overlapping virtual objects, the overall outer contour corresponding to the overlapping objects is a closed curve formed by connecting the vertices of the overlapping virtual objects in a certain order. When the plurality of virtual objects include non-overlapping virtual objects, the overall outer contour is made up of the outer contours of the individual virtual objects. Thus, where the plurality of virtual objects include both overlapping and non-overlapping virtual objects, the overall outer contour may include a closed curve formed by the overlapping virtual objects and an outer contour formed separately by each non-overlapping virtual object.
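Continuing the CommandBuffer sketch given earlier, merging a plurality of selected objects into one overall silhouette would only require drawing each renderer with the same solid-white material; selectedRenderers is an assumed collection of the selected objects' renderers:

```csharp
// Drawing every selected renderer with the same solid color leaves no color
// difference at overlaps, so the later edge pass yields one overall outer contour.
foreach (Renderer r in selectedRenderers)
{
    cmd.DrawRenderer(r, solidWhite);
}
```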
In step 330, a second image including an outer contour tracing of the virtual object is obtained from the first image.
Here, the pixel positions where tracing is required may be calculated in the first image, and then a second image including the outer contour tracing of the virtual object may be drawn based on those pixel positions.
It should be noted that the second image may be an image constructed based on an image space corresponding to the virtual scene picture. That is, the width and height of the second image may be consistent with the width and height of the virtual scene picture and the first image.
FIG. 6 is a schematic diagram of a second image shown according to some embodiments. As shown in FIG. 6, the second image 405 includes an outer contour tracing 404.
It is worth noting that the area of the second image other than the outer contour tracing may be transparent. Of course, the second image may also comprise only the pixels of the outer contour tracing.
In step 340, the second image is superimposed on the virtual scene picture.
Here, the second image may be incorporated into the virtual scene picture so as to be superimposed on the virtual scene picture for display. For example, the outer contour tracing 404 shown in FIG. 6 may be incorporated into the virtual scene picture shown in FIG. 4.
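As an illustrative sketch of step 340 under the built-in pipeline, the second image could be blitted over the camera output by a camera-attached script; outlineImage and overlayMaterial are assumed fields, and the overlay material is assumed to use alpha blending (e.g., SrcAlpha/OneMinusSrcAlpha) so that the transparent areas of the second image leave the virtual scene picture visible:

```csharp
using UnityEngine;

// Attach to the camera rendering the virtual scene picture (built-in pipeline).
public class StrokeOverlay : MonoBehaviour
{
    public Texture outlineImage;     // the second image containing the outer contour tracing
    public Material overlayMaterial; // assumed to alpha-blend its input over the target

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination);                         // the virtual scene picture
        Graphics.Blit(outlineImage, destination, overlayMaterial);  // superimpose the tracing
    }
}
```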
FIG. 7 is a schematic diagram illustrating a display of an outer contour tracing, according to some embodiments. As shown in FIG. 7, the outer contour tracing 404 of the virtual object 402 is displayed in the virtual scene picture 401.
Therefore, through this embodiment, the width of the outer contour tracing is not affected when the virtual scene is zoomed, and the tracing always remains consistent, ensuring a consistent tracing effect. Moreover, since the virtual object is individually rendered to the first image and its outer contour tracing is superimposed on the upper layer of the virtual scene picture, the complete outer contour tracing of the virtual object can be displayed even if the virtual object is occluded in the virtual scene.
FIG. 8 is a detailed flowchart of step 330 shown in FIG. 3. As shown in FIG. 8, in some implementations, step 330 may include the following steps.
In step 331, a first pixel point not belonging to the virtual object is determined in the first image based on the color channel to which the virtual object belongs.
Here, as described in the above embodiments, the virtual object is rendered with a solid color into a first image whose color differs from that solid color; therefore, the first pixel points that do not belong to the virtual object can be determined in the first image through the color channel to which the solid color of the virtual object belongs and the color channel to which the background color of the first image belongs.
As shown in FIG. 5, the white virtual object 402 and the background color (black) of the first image 403 belong to different color channels, so the first pixel points that do not belong to the virtual object 402 can be determined in the first image 403 through the color channels.
For each pixel point in the first image, whether the pixel point belongs to the virtual object can be determined according to the color channel to which it belongs: if the color channel of the pixel point belongs to the color channel of the virtual object, the pixel point belongs to the virtual object; otherwise, the pixel point is a first pixel point that does not belong to the virtual object.
It should be noted that, through the color channels, whether each pixel point of the first image lies inside or outside the virtual object can be distinguished, so that the outer contour of the virtual object can be traced accurately.
In step 332, for each first pixel point, at least one pixel point is sampled in the first image within a sampling range centered on the first pixel point and having the preset tracing width as a radius.
Here, after determining the first pixel points, it may be determined for each first pixel point whether the first pixel point is a second pixel point for which edge tracing is required.
For each determined first pixel point, at least one pixel point is sampled in the first image within a sampling range centered on the first pixel point with the preset tracing width as the radius. The preset tracing width can be set according to the actual situation; for example, it may be 1 pixel, 2 pixels, and so on.
FIG. 9 is a schematic diagram of a sampling range shown according to some embodiments. As shown in FIG. 9, a sampling range 902 is constructed with a first pixel point 901 as the center and the preset tracing width as the radius, and a plurality of pixel points are sampled from the first image through the sampling range 902.
It should be understood that some or all of the pixels within the sampling range are sampled pixels.
In step 333, in the case where the sampled pixel points include a pixel point belonging to the virtual object, the first pixel point is determined as a second pixel point.
Here, when at least one of the sampled pixel points belongs to the virtual object, this indicates that the first pixel point is an edge point adjacent to the outer contour of the virtual object. In this case, the first pixel point is determined as a second pixel point where tracing is required.
FIG. 10 is a schematic diagram of second pixel points shown according to some embodiments. As shown in FIG. 10, the gray pixel points are the second pixel points to be traced.
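Steps 331 to 333 can be sketched on the CPU as follows; a production implementation would typically run the same test as a fragment shader over the first image, and the red-channel check reflects the white-on-black convention assumed earlier:

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class StrokeEdgeFinder
{
    // The object was rendered in solid white on black, so the red channel
    // alone tells whether a pixel belongs to the virtual object (step 331).
    static bool IsObjectPixel(Texture2D firstImage, int x, int y)
    {
        return firstImage.GetPixel(x, y).r > 0.5f;
    }

    public static List<Vector2Int> FindSecondPixels(Texture2D firstImage, int tracingWidth)
    {
        var secondPixels = new List<Vector2Int>();
        for (int y = 0; y < firstImage.height; y++)
        for (int x = 0; x < firstImage.width; x++)
        {
            if (IsObjectPixel(firstImage, x, y)) continue; // keep only first pixel points
            bool nearObject = false;
            // Step 332: sample within a disc centered on the first pixel point,
            // with the preset tracing width as the radius.
            for (int dy = -tracingWidth; dy <= tracingWidth && !nearObject; dy++)
            for (int dx = -tracingWidth; dx <= tracingWidth && !nearObject; dx++)
            {
                if (dx * dx + dy * dy > tracingWidth * tracingWidth) continue;
                int sx = Mathf.Clamp(x + dx, 0, firstImage.width - 1);
                int sy = Mathf.Clamp(y + dy, 0, firstImage.height - 1);
                if (IsObjectPixel(firstImage, sx, sy)) nearObject = true;
            }
            // Step 333: a sampled pixel belongs to the object, so this first
            // pixel point becomes a second pixel point to be traced.
            if (nearObject) secondPixels.Add(new Vector2Int(x, y));
        }
        return secondPixels;
    }
}
```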
In step 334, a second image is generated according to the position information of the second pixel points, where the pixel points corresponding to the second pixel points in the second image comprise a tracing color.
Here, after all the second pixel points are determined in the first image, a second image is generated based on the position information of the second pixel points in the first image.
For example, a second image including the outer contour tracing of the virtual object may be obtained by coloring, on a transparent image belonging to the same image space as the first image, the pixel points that match the position information of the second pixel points with a preset tracing color.
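Following on from the sketch above, step 334 might paint the preset tracing color at the second pixel positions on a transparent image of the same size; all names here are illustrative:

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class SecondImageBuilder
{
    // Builds the second image: transparent everywhere except at the second
    // pixel points, which receive the preset tracing color.
    public static Texture2D Build(int width, int height,
                                  List<Vector2Int> secondPixels, Color tracingColor)
    {
        var secondImage = new Texture2D(width, height, TextureFormat.RGBA32, false);
        var transparent = new Color(0f, 0f, 0f, 0f);
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                secondImage.SetPixel(x, y, transparent);
        foreach (Vector2Int p in secondPixels)
            secondImage.SetPixel(p.x, p.y, tracingColor);
        secondImage.Apply(); // upload the pixel data to the GPU
        return secondImage;
    }
}
```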
In some embodiments, the pixel point corresponding to the second pixel point in the initial image may be colored according to the position information of the second pixel point, and the colored pixel point may be subjected to Gaussian blur processing based on a Gaussian blur algorithm, so as to obtain the second image.
The initial image may be an image whose width and height are identical to those of the first image; for example, the initial image may be a transparent image. After the position information of the second pixel points is obtained, the pixel points corresponding to the second pixel points in the initial image are colored with a preset tracing color, and Gaussian blur processing is performed on the colored pixel points using a Gaussian blur algorithm.
The Gaussian blur algorithm is used to spread the blur effect of the colored pixel points toward the side away from the virtual object, that is, toward the external space outside the outer contour that does not belong to the virtual object. In this way, the tracing effect can be optimized without affecting the rendering of the virtual object itself.
By diffusing the blur effect of the colored pixel points toward the side away from the virtual object, the Gaussian blur algorithm reduces the jagged appearance of the outer contour tracing, making the tracing effect smoother and more natural.
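The masked blur described above can be sketched as follows: a one-dimensional Gaussian kernel is applied horizontally, and pixels inside the virtual object (white in the first image) keep their original value, so the softening spreads only toward the side away from the object. A real implementation would normally use a separable two-pass shader rather than per-pixel CPU reads:

```csharp
using UnityEngine;

public static class StrokeBlur
{
    // Normalized one-dimensional Gaussian weights for the given radius and sigma.
    public static float[] GaussianKernel(int radius, float sigma)
    {
        var kernel = new float[2 * radius + 1];
        float sum = 0f;
        for (int i = -radius; i <= radius; i++)
        {
            kernel[i + radius] = Mathf.Exp(-(i * i) / (2f * sigma * sigma));
            sum += kernel[i + radius];
        }
        for (int i = 0; i < kernel.Length; i++) kernel[i] /= sum; // weights sum to 1
        return kernel;
    }

    // Horizontal blur of one pixel of the second image; pixels inside the
    // virtual object (white in the first image) keep their original value,
    // so the blur only diffuses toward the side away from the object.
    public static Color BlurredAt(Texture2D secondImage, Texture2D firstImage,
                                  int x, int y, float[] kernel, int radius)
    {
        if (firstImage.GetPixel(x, y).r > 0.5f)
            return secondImage.GetPixel(x, y);
        Color acc = Color.clear;
        for (int i = -radius; i <= radius; i++)
        {
            int sx = Mathf.Clamp(x + i, 0, secondImage.width - 1);
            acc += secondImage.GetPixel(sx, y) * kernel[i + radius];
        }
        return acc;
    }
}
```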
Therefore, through this embodiment, by rendering the virtual object to the first image and distinguishing the pixel points that do not belong to the virtual object, the outer contour tracing of the virtual object can be drawn accurately and clearly while avoiding drawing the inner contour inside the virtual object.
The stroke display method according to the embodiments of the present disclosure is further illustrated below with reference to FIGS. 11 to 16.
FIG. 11 is a schematic diagram of a virtual scene picture shown according to some embodiments. As shown in FIG. 11, in response to a selection operation for a virtual object in a virtual scene, a corresponding virtual scene picture is acquired.
FIG. 12 is a schematic diagram of a first image shown according to some embodiments. As shown in FIG. 12, in response to a selection operation for a virtual object in the virtual scene, a virtual object 1202 that needs to be traced is rendered into a first image 1201 using a solid color.
FIG. 13 is a schematic diagram of an outer contour tracing shown in accordance with some embodiments. As shown in FIG. 13, a second image 1301 including an outer contour tracing 1302 corresponding to the virtual object 1202 is obtained from the first image 1201 shown in FIG. 12.
FIG. 14 is a schematic diagram illustrating a display of an outer contour tracing, according to some embodiments. As shown in FIGS. 11, 13, and 14, the outer contour tracing 1302 shown in FIG. 13 is incorporated into the virtual scene picture shown in FIG. 11, so that the outer contour tracing 1302 corresponding to the virtual object 1202 is displayed in the virtual scene picture.
FIG. 15 is a schematic diagram illustrating Gaussian blur effects according to some embodiments. As shown in FIG. 15, when the outer contour tracing is drawn, a directional blur is computed by the Gaussian blur algorithm so that the blur effect of the tracing spreads toward the side away from the virtual object.
FIG. 16 is a schematic diagram illustrating a display of an outer contour tracing according to further embodiments. As shown in FIG. 16, the outer contour tracing after the Gaussian blur processing of FIG. 15 is incorporated into the virtual scene picture, reducing the jagged appearance of the tracing and making the tracing effect smoother and more natural.
FIG. 17 is a schematic structural diagram of a stroke display apparatus, shown according to some embodiments. As shown in FIG. 17, an embodiment of the present disclosure provides a stroke display apparatus 1700, the stroke display apparatus 1700 including:
a first display module 1701 configured to display a virtual scene picture;
a tracing module 1702 configured to trace, in response to a selection operation for a virtual object in the virtual scene, an outer contour of the virtual object in an image space corresponding to the virtual scene picture to obtain an outer contour tracing of the virtual object;
a second display module 1703 configured to superimpose and display the outer contour tracing of the virtual object on the virtual scene picture, where the width of the outer contour tracing remains unchanged when the virtual scene is zoomed.
Optionally, when the virtual object is at least partially occluded, the outer contour tracing of the virtual object is a complete outer contour tracing of the virtual object.
Optionally, the tracing module 1702 is specifically configured to:
rendering the virtual object to a first image, wherein the first image is an image constructed based on an image space corresponding to the virtual scene picture;
obtaining a second image comprising an outer contour tracing of the virtual object according to the first image;
The second display module 1703 is specifically configured to:
superimposing and displaying the second image on the virtual scene picture.
Optionally, the tracing module 1702 is specifically configured to:
rendering the virtual object to a first image with a solid color, where the color of the first image is different from the solid color.
Optionally, the tracing module 1702 is specifically configured to:
when a plurality of virtual objects are selected, rendering the plurality of virtual objects to a first image with the same solid color, so as to obtain, through the first image, an outer contour tracing corresponding to the overall outer contour formed by the plurality of virtual objects, where the color of the first image is different from the solid color.
Optionally, the tracing module 1702 is specifically configured to:
determining a first pixel point that does not belong to the virtual object in the first image based on a color channel to which the virtual object belongs;
sampling, for each first pixel point, at least one pixel point in the first image within a sampling range centered on the first pixel point and having the preset tracing width as a radius;
determining the first pixel point as a second pixel point in the case where the sampled pixel points include a pixel point belonging to the virtual object; and
generating a second image according to the position information of the second pixel point, where the pixel point corresponding to the second pixel point in the second image comprises a tracing color.
Optionally, the tracing module 1702 is specifically configured to:
coloring a pixel point corresponding to the second pixel point in an initial image according to the position information of the second pixel point, and performing Gaussian blur processing on the colored pixel point based on a Gaussian blur algorithm to obtain the second image, where the Gaussian blur algorithm is used to spread the blur effect of the colored pixel point toward the side away from the virtual object.
For the logic of the method performed by each functional module in the above stroke display apparatus 1700, reference may be made to the relevant parts of the method embodiments described above, which are not repeated here.
Referring now to FIG. 18, a schematic diagram of an electronic device (e.g., a terminal device) 1800 suitable for implementing the embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 18 is merely an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present disclosure.
As shown in FIG. 18, the electronic device 1800 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1801, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1802 or a program loaded from a storage device 1808 into a Random Access Memory (RAM) 1803. The RAM 1803 also stores various programs and data required for the operation of the electronic device 1800. The processing device 1801, the ROM 1802, and the RAM 1803 are connected to each other via a bus 1804. An input/output (I/O) interface 1805 is also connected to the bus 1804.
In general, the following devices may be connected to the I/O interface 1805: input devices 1806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 1807 including, for example, a Liquid Crystal Display (LCD), a speaker, and a vibrator; storage devices 1808 including, for example, a magnetic tape and a hard disk; and communication devices 1809. The communication devices 1809 may allow the electronic device 1800 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 18 illustrates an electronic device 1800 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided; more or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1809, or from the storage device 1808, or from the ROM 1802. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 1801.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber optic cable, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the electronic device may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be included in the electronic device or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: display a virtual scene picture; in response to a selection operation for a virtual object in the virtual scene, trace the outer contour of the virtual object in an image space corresponding to the virtual scene picture to obtain an outer contour tracing of the virtual object; and superimpose and display the outer contour tracing of the virtual object on the virtual scene picture, where the width of the outer contour tracing remains unchanged when the virtual scene is zoomed.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a module does not limit the module itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to herein is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above; rather, the specific features and acts described above are example forms of implementing the claims. The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in connection with the method embodiments and will not be elaborated here.