CN121041679A - Virtual scene display method, device, computer equipment and storage medium - Google Patents
- Publication number: CN121041679A
- Application number: CN202410643681.7A
- Authority
- CN
- China
- Prior art keywords
- virtual
- virtual object
- scene
- displayed
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
- A63F13/533—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
- A63F13/537—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
- A63F13/5372—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The embodiments of the present application disclose a virtual scene display method and apparatus, a computer device, and a storage medium, belonging to the field of computer technology. The method comprises: displaying a virtual scene; when an object detection function is turned on, displaying, in the virtual scene, a second virtual object belonging to a target type in a first display state; and when the second virtual object is aimed at, displaying information associated with the second virtual object in the virtual scene. The scheme provided by the embodiments of the present application realizes a new interaction mode: information associated with other virtual objects can be viewed without controlling the locally controlled virtual object to approach them, which saves the time needed to move the first virtual object toward the second virtual object, preserves the interaction effect, improves interaction efficiency, and thereby improves human-computer interaction efficiency.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a virtual scene display method, a virtual scene display device, computer equipment and a storage medium.
Background
With the development of computer technology, games have become increasingly popular. In current games, a user can control a virtual object to move through a virtual scene in order to talk to or view other virtual objects in that scene. However, in current games a virtual object can only talk to or view another virtual object when it is close to that object; this interaction mode yields a poor interaction effect and low interaction efficiency.
Disclosure of Invention
The embodiment of the application provides a virtual scene display method, a virtual scene display device, computer equipment and a storage medium, which can improve the interaction effect and further improve the interaction efficiency. The technical scheme is as follows:
in one aspect, a virtual scene display method is provided, the method including:
displaying a virtual scene, wherein a first virtual object is displayed in the virtual scene;
displaying, when an object detection function is turned on, a second virtual object belonging to a target type in a first display state in the virtual scene, wherein the object detection function is used to detect virtual objects of the target type around the first virtual object, and the first display state indicates that the second virtual object has been detected; and
displaying, when the second virtual object is aimed at, information associated with the second virtual object in the virtual scene.
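The three claimed steps describe an algorithmic flow that could be sketched as follows. This is a minimal illustration only; the patent specifies no code, and every class, field, and state name below (`VirtualObject`, `Scene`, `"detected"`, etc.) is a hypothetical choice of the editor.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class VirtualObject:
    name: str
    obj_type: str                  # e.g. "stealth" or "task" (the target type)
    display_state: str = "normal"

@dataclass
class Scene:
    player: VirtualObject                          # the first virtual object
    others: List[VirtualObject] = field(default_factory=list)
    detection_on: bool = False

    def enable_detection(self, target_type: str) -> None:
        # Step 2: with the detection function on, every object of the
        # target type is shown in the first ("detected") display state.
        self.detection_on = True
        for obj in self.others:
            if obj.obj_type == target_type:
                obj.display_state = "detected"

    def aim_at(self, obj: VirtualObject, info: Dict[str, str]) -> Optional[str]:
        # Step 3: once a detected object is aimed at, its associated
        # information is shown without moving the player closer to it.
        if self.detection_on and obj.display_state == "detected":
            return info.get(obj.name)
        return None
```

Under this sketch, aiming at a target-type object before detection is turned on yields nothing, while after `enable_detection` it returns the associated information directly.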
In one possible implementation, displaying the information associated with the second virtual object in the virtual scene when the second virtual object is aimed at includes:
displaying the information associated with the second virtual object in the virtual scene when a sight (crosshair) in the virtual scene is aimed at the second virtual object.
In another possible implementation, displaying a target display special effect in the virtual scene in response to an operation that turns on the object detection function includes:
in response to the turn-on operation, displaying the target display special effect spreading outward in the virtual scene, centered on the first virtual object.
In another possible implementation, after displaying the information associated with the second virtual object in the virtual scene when the second virtual object is aimed at, the method further includes:
displaying a game prompt message in the virtual scene, the game prompt message prompting that the task of detecting the second virtual object has been completed.
In another aspect, there is provided a virtual scene display apparatus, the apparatus including:
the display module is used for displaying a virtual scene, wherein a first virtual object is displayed in the virtual scene;
The display module is further configured to display, in the virtual scene, a second virtual object belonging to a target type in a first display state when an object detection function is turned on, where the object detection function is configured to detect virtual objects belonging to the target type around the first virtual object, and the first display state indicates that the second virtual object is detected;
The display module is further configured to display information associated with the second virtual object in the virtual scene when the second virtual object is aimed.
In one possible implementation manner, the display module is configured to switch, in the virtual scene, a display state of displaying the second virtual object to the first display state when the object detection function is turned on.
In another possible implementation manner, the display module is configured to switch, in the virtual scene, a display state in which the second virtual object is displayed to the first display state when the object detection function is turned on and the second virtual object is located within a target range, where the target range is a range centered on a position of the first virtual object and having a first distance as a radius.
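The "target range" above is a circle (or sphere) centered on the first virtual object's position with the first distance as its radius. A minimal sketch of that membership test, with all names hypothetical:

```python
import math

def within_target_range(player_pos, obj_pos, first_distance):
    """True when obj_pos lies inside the region centered on the player's
    position whose radius is the 'first distance' from the description."""
    return math.dist(player_pos, obj_pos) <= first_distance
```

For example, an object at (3, 4) is inside a range of radius 5 around the origin (distance exactly 5.0) but outside a range of radius 4.9.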
In another possible implementation manner, the display module is configured to display, in the virtual scene, information associated with the second virtual object when a sight in the virtual scene aims at the second virtual object.
In another possible implementation manner, the display module is configured to switch, in the virtual scene, a display state of displaying the sight to a second display state, and display information associated with the second virtual object, where the second display state indicates that the sight has aimed at the detected virtual object.
In another possible implementation, the display module is configured to: display, in the virtual scene, prompt information associated with the second virtual object when the second virtual object is aimed at but its detail information has not yet been displayed, the prompt information indicating that the detail information of the second virtual object is not displayed; or display, in the virtual scene, profile information associated with the second virtual object when the second virtual object is aimed at and its detail information has been displayed.
In another possible implementation manner, the display module is further configured to highlight, in the virtual scene, the second virtual object in response to a detail detection operation on the prompt information, display, in the virtual scene, detail information of the second virtual object, and display, in the virtual scene, profile information associated with the second virtual object in response to a closing operation on the detail information.
In another possible implementation manner, the display module is further configured to display, in the virtual scene, a target display special effect in response to an on operation of the object detection function, where the target display special effect indicates that the object detection function is turned on.
In another possible implementation manner, the display module is configured to display, in response to the start operation, that the target display effect is spread around the first virtual object in the virtual scene.
In another possible implementation manner, the display module is further configured to, in the case where the virtual object of the target type exists around the first virtual object, switch, in the virtual scene, a display state of a display function entry to a third display state, where the function entry is an entry of the object detection function, and the third display state indicates that the virtual object of the target type exists around the first virtual object;
The display module is used for responding to the triggering operation of the function entry when the function entry is displayed in the third display state, and displaying the target display special effect in the virtual scene.
In another possible implementation manner, the display module is configured to switch, in the virtual scene, a display state of displaying the function entry to the third display state when the virtual object of the target type exists in a target range, where the target range is a range centered on the position of the first virtual object with the first distance as its radius.
In another possible implementation manner, the display module is configured to determine, when a third virtual object belonging to the target type exists within the target range, the distance between the first virtual object and the third virtual object, and, when that distance is smaller than a second distance, switch the display state of the function entry to the third display state in the virtual scene, where the second distance is the distance within which the third virtual object can be detected.
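The two-tier distance check above (inside the target range and closer than the second distance) could be sketched as follows. The function and state names are hypothetical illustrations, not anything the patent defines.

```python
import math

def entry_display_state(player_pos, nearby, target_type,
                        first_distance, second_distance):
    """Return 'third' (the highlighted third display state of the function
    entry) when some object of the target type lies inside the target range
    AND within the detectable second distance; otherwise 'normal'."""
    for obj_pos, obj_type in nearby:
        if obj_type != target_type:
            continue
        d = math.dist(player_pos, obj_pos)
        if d <= first_distance and d < second_distance:
            return "third"
    return "normal"
```

So a task-type object at distance 3 triggers the third display state when the second distance is 5, but not when it is 2.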
In another possible implementation manner, the display module is configured to display, in response to the opening operation, a target display special effect in the virtual scene in a case where the first virtual object is located in the target area.
In another possible implementation manner, the display module is configured to display, in the virtual scene, information associated with the second virtual object and matched with behavior information of the first virtual object, when the second virtual object has been targeted.
In another possible implementation manner, the display module is configured to display, in the virtual scene, information associated with the second virtual object if the second virtual object has been aimed, and a distance between the first virtual object and the second virtual object is within a distance range.
In another possible implementation manner, the display module is further configured to display distance prompt information in the virtual scene when the second virtual object has been aimed at but the distance between the first virtual object and the second virtual object is outside the distance range, where the distance prompt information is used to prompt that the distance between the first virtual object and the second virtual object is outside the distance range.
In another possible implementation manner, the display module is further configured to cancel display of the second virtual object in the virtual scene in response to a closing operation of the object detection function, or switch a display state of the second virtual object to a fourth display state in the virtual scene in response to a closing operation of the object detection function.
In another possible implementation manner, the display module is further configured to display game prompt information in the virtual scene, where the game prompt information is used to prompt that the task of detecting the second virtual object has been completed.
In another possible implementation manner, when the object detection function is turned on and an obstacle lies between the first virtual object and the second virtual object, the display module is configured to do one of the following in the virtual scene: switch the region of the obstacle that blocks the second virtual object to a transparent state, so that the second virtual object is displayed in the first display state; or increase the transparency of the obstacle, so that the second virtual object is displayed in the first display state; or display the outline of the second virtual object, in the first display state, within the region of the obstacle that blocks it.
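The three alternative occlusion treatments described above can be summarized as a rendering-mode selection. This is an editor's illustration only; the mode names are hypothetical and no rendering API is implied by the patent.

```python
def render_detected(obj, occluded_by_obstacle, mode="outline"):
    """Choose how to draw a detected object that an obstacle hides:
    make the blocking region transparent, raise the whole obstacle's
    transparency, or draw only the object's outline through it."""
    if not occluded_by_obstacle:
        return "draw_full"
    if mode == "transparent_region":
        return "draw_full_through_transparent_region"
    if mode == "raise_transparency":
        return "draw_full_with_translucent_obstacle"
    return "draw_outline_through_obstacle"
```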
In another aspect, a computer device is provided, the computer device including a processor and a memory, the memory storing at least one computer program, the at least one computer program loaded and executed by the processor to implement operations performed by the virtual scene display method as described in the above aspects.
In another aspect, there is provided a computer readable storage medium having stored therein at least one computer program loaded and executed by a processor to implement the operations performed by the virtual scene display method as described in the above aspects.
In yet another aspect, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the operations performed by the virtual scene display method as described in the above aspect.
According to the scheme provided by the embodiments of the present application, turning on the object detection function detects virtual objects of the target type around the locally controlled virtual object and displays them in a special display state, reminding the user that information about the detected objects can be viewed by aiming at them, whereupon that information is displayed directly in the virtual scene. This realizes a new interaction mode: virtual objects of the target type can be detected quickly, without manually searching the virtual scene for them, which saves detection time and improves detection efficiency. Moreover, when viewing information associated with a detected virtual object, there is no need to control the local virtual object to approach it, which saves the time needed to move the first virtual object toward the second virtual object, preserves the interaction effect, and improves human-computer interaction efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flowchart of a virtual scene display method provided by an embodiment of the present application;
FIG. 3 is a flowchart of another virtual scene display method provided by an embodiment of the present application;
FIG. 4 is a flowchart of a method for displaying a virtual scene provided by an embodiment of the present application;
FIG. 5 is a flowchart of a method for displaying a virtual scene provided by an embodiment of the present application;
FIG. 6 is a flowchart of a method for displaying a virtual scene provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a virtual scene provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of another virtual scene provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of yet another virtual scene provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of yet another virtual scene provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of yet another virtual scene provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of yet another virtual scene provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of yet another virtual scene provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of yet another virtual scene provided by an embodiment of the present application;
FIG. 15 is a flowchart of a method for displaying a virtual scene provided by an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a virtual scene display device provided by an embodiment of the present application;
FIG. 17 is a schematic structural diagram of a terminal provided by an embodiment of the present application;
FIG. 18 is a schematic structural diagram of a server provided by an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings.
The terms "first," "second," "third," "fourth," "fifth," "sixth," and the like as used herein may be used to describe various concepts, but are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first virtual object may be referred to as a second virtual object, and similarly, a second virtual object may be referred to as a first virtual object, without departing from the scope of the application.
As used herein, "at least one" includes one, two, or more; "a plurality" includes two or more; "each" refers to every one of the corresponding plurality; and "any" refers to any one of the plurality. For example, if a plurality of virtual objects includes 3 virtual objects, "each" refers to every one of the 3 virtual objects, and "any" refers to any one of the 3, which may be the first virtual object, the second virtual object, or the third virtual object.
In order to facilitate understanding of the embodiments of the present application, some terms related to the embodiments of the present application will be explained first:
The virtual scene is the scene displayed (or provided) when an application runs on the terminal, i.e., the scene displayed while the terminal runs a game, also called a world scene. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment, and may be any of a two-dimensional, 2.5-dimensional, or three-dimensional virtual scene, which is not limited by the present application. For example, a virtual scene includes sky, land, sea, and the like, with the land containing environmental elements such as deserts and cities, and the user can control a virtual object to move through the virtual scene. The virtual scene may also include virtual items, such as thrown props, buildings, vehicles, and other props, and can simulate real environments under different weather conditions, such as sunny, rainy, foggy, or night conditions. This variety of scene elements enhances the diversity and realism of virtual scenes.
Virtual object refers to a virtual character that can move in a virtual scene; the movable object may be a virtual character, a virtual animal, a cartoon character, or the like. The virtual object is a virtual avatar that represents the user in the virtual scene. The virtual scene includes a plurality of virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies part of the space in the virtual scene. Alternatively, the virtual object is a character controlled through operations on a client, an artificial intelligence (AI) set in the virtual environment through training, or a non-player character (NPC) set in the virtual scene. Optionally, the virtual object is a virtual character competing in the virtual scene.
Virtual prop refers to a prop that can be used by a virtual object in a virtual scene. For example, the virtual prop is a virtual weapon, a virtual vehicle, or another virtual item.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, the information involved in the present application is obtained with sufficient authorization.
The virtual scene display method provided by the embodiment of the present application is executed by a computer device. Optionally, the computer device is a terminal or a server. Optionally, the server is an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms. Optionally, the terminal is a smart phone, tablet computer, notebook computer, desktop computer, smart speaker, smart watch, smart voice interaction device, smart home appliance, vehicle-mounted terminal, or the like, but is not limited thereto.
In some embodiments, the computer program according to the embodiments of the present application may be deployed to be executed on one computer device or on multiple computer devices located at one site or on multiple computer devices distributed across multiple sites and interconnected by a communication network, where the multiple computer devices distributed across multiple sites and interconnected by the communication network form a blockchain system.
In some embodiments, the computer device is provided as a terminal. FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to fig. 1, the implementation environment includes a terminal 101 and a server 102, and the terminal 101 and the server 102 are connected through a wireless or wired network.
The server 102 is used for providing the terminal 101 with data of the virtual scene, and the terminal 101 is used for displaying the virtual scene based on the data provided by the server 102.
In some embodiments, the terminal 101 has installed thereon an application served by the server 102, through which the terminal 101 can implement functions such as data transmission and games. Alternatively, the application is an application in the operating system of the terminal 101 or an application provided by a third party. For example, the application is a game application having a game function, but the game application can also have other functions, such as a review function, a shopping function, a navigation function, and the like.
The terminal 101 is configured to log in to the application based on an account number and display the virtual scene through the application; while displaying the virtual scene, the terminal 101 responds to operations in the virtual scene and interacts with the server 102 through the application so as to respond to those operations in the displayed virtual scene.
Fig. 2 is a flowchart of a virtual scene display method provided by an embodiment of the present application, where the method is performed by a terminal, as shown in fig. 2, and the method includes:
201. the terminal displays a virtual scene in which a first virtual object is displayed.
In the embodiment of the present application, the terminal displays a virtual scene in which a first virtual object is shown, so that the terminal can control the first virtual object to move in the virtual scene or perform other operations in order to interact with other virtual objects in the virtual scene. For example, the user controls, through the terminal, the first virtual object to talk with other virtual objects, view information associated with them, fight them, and so on.
The first virtual object may be a virtual object controlled by the local end, or may be other virtual objects in the virtual scene. The first virtual object is any type of virtual object, for example, the first virtual object is a virtual character, a virtual animal, or other type of virtual object, etc.
In the embodiment of the present application, the terminal can display the virtual scene from a first-person perspective or a third-person perspective. When the virtual scene is displayed from the first-person perspective, only part of the first virtual object is displayed; when it is displayed from the third-person perspective, the complete first virtual object can be displayed. For example, from the first-person perspective only the arms of the virtual object appear in the virtual scene, whereas from the third-person perspective the whole virtual object appears.
202. And when the object detection function is started, the terminal displays a second virtual object belonging to the target type in a first display state in the virtual scene, wherein the object detection function is used for detecting virtual objects belonging to the target type around the first virtual object, and the first display state indicates that the second virtual object is detected.
In the embodiment of the present application, the object detection function is used to detect virtual objects of a special type around the first virtual object. When the terminal has turned on the object detection function, virtual objects of the target type around the locally controlled first virtual object are detected automatically and displayed in a special display state in the virtual scene to indicate which virtual objects have been detected, so that the terminal can subsequently control the first virtual object to interact with the detected virtual objects.
In the embodiment of the present application, one option is that, when the object detection function is not turned on, the virtual object of the target type is not displayed in the virtual scene at all, and it is displayed only after the object detection function is turned on. Alternatively, when the object detection function is not turned on, the virtual object of the target type is displayed in the virtual scene but in the same display state as the other virtual objects; once the object detection function is turned on, it is displayed in a special display state, namely the first display state, in which no other virtual object is displayed, so that by looking at the displayed virtual objects the user can know which ones have been detected as belonging to the target type.
The first display state is different from the display state of virtual objects that have not been detected. The first display state is an arbitrary display state; for example, a red special effect is displayed around the virtual object, or the virtual object is displayed enlarged. The second virtual object is any type of virtual object, for example a virtual character, a virtual item, a virtual animal, or a virtual building. In an embodiment of the present application, the first virtual object and the second virtual object may be of the same type or of different types; for example, both are virtual characters, or the first virtual object is a virtual character and the second virtual object is a virtual item.
The target type is any type, for example a stealth type, a task type, or some other type. A stealth-type virtual object is one that cannot be displayed in the virtual scene unless the object detection function is turned on, and a task-type virtual object is one associated with a task of the first virtual object. For example, when the object detection function is not turned on, a task-type virtual object either is not displayed in the virtual scene at all, or is displayed in the same display state as the other virtual objects, so that the user cannot tell which virtual objects are related to the task. Here, the task is one accepted by the first virtual object, such as a task of searching for a virtual object or for an item.
203. When the second virtual object is aimed at, the terminal displays information associated with the second virtual object in the virtual scene.
In an embodiment of the present application, once a virtual object of the target type has been detected, the information associated with it can be viewed by aiming at it, and that information is displayed directly in the virtual scene. Even if the first virtual object is not close to the second virtual object, the associated information can still be viewed, which saves the time otherwise needed to move the first virtual object closer and thereby improves interaction efficiency.
The information associated with the second virtual object describes the second virtual object and can take any form, for example text, an image, or a video.
According to the scheme provided by the embodiments of the present application, the object detection function is turned on, virtual objects of the target type around the locally controlled virtual object are detected and displayed in a special display state, and the information associated with a detected object can then be viewed by aiming at it and is displayed directly in the virtual scene. This realizes a new interaction mode: virtual objects of the target type can be detected quickly, without the user having to search the virtual scene manually, which saves detection time and improves detection efficiency. Meanwhile, the information associated with a detected virtual object can be viewed without controlling the first virtual object to approach it, which saves the time needed to move the first virtual object close to the second virtual object, guarantees the interaction effect, and improves human-computer interaction efficiency.
Based on the embodiment shown in fig. 2, in an embodiment of the present application the object detection function is turned on through a function entry of the object detection function in the scene interface, where the function entry can be displayed in different display states to indicate whether a virtual object of the target type exists around the first virtual object. The specific process is described in the following embodiment.
Fig. 3 is a flowchart of a virtual scene display method provided by an embodiment of the present application, where the method is performed by a terminal, and as shown in fig. 3, the method includes:
301. The terminal displays a virtual scene in which a first virtual object is displayed.
In one possible implementation, a second virtual object and other virtual objects are displayed in the virtual scene, both of which are displayed in a default display state.
The other virtual objects are virtual objects controlled by other terminals, or artificial-intelligence virtual objects, and may be virtual characters, virtual animals, virtual items, and so on. The second virtual object is a virtual object of the target type, whereas the other virtual objects are not, i.e. they are of types other than the target type. The default display state is any display state different from the first display state. For example, when a virtual object is displayed in the default display state, its shape and color are displayed without any additional special effect, whereas in the first display state its shape and color are displayed and an additional special effect is displayed around it.
In an embodiment of the present application, when the object detection function is not turned on, even if a virtual object of the target type is displayed in the virtual scene, its display state is the same as that of virtual objects of other types, so that the target type cannot be distinguished from the other types. This increases the fun of the game and thereby improves the user experience.
In one possible implementation, while the terminal displays the virtual scene, the displayed picture can be adjusted through a viewing-angle adjustment operation or by moving the virtual object. That is, the method further includes: in response to a viewing-angle adjustment operation in the virtual scene, displaying the adjusted virtual scene, which is shot by the virtual camera according to the adjusted viewing angle; and, in response to a movement operation on the first virtual object, displaying the moved virtual scene and the first virtual object, where the moved virtual scene is shot by the virtual camera according to the moved position.
In an embodiment of the present application, the viewing-angle adjustment operation adjusts the shooting angle of the virtual camera, so that the virtual scene shot by the camera changes accordingly. The virtual camera and the first virtual object maintain a preset relative positional relationship: however the first virtual object moves in the virtual scene, that relationship remains unchanged, and the virtual camera can shoot the virtual scene from any viewing angle. As the first virtual object moves, the position of the virtual camera in the virtual scene changes, and so does the scene it shoots. Accordingly, the displayed virtual scene is adjusted through a viewing-angle adjustment operation or a movement operation on the first virtual object, so as to adjust the content displayed in the virtual scene.
For example, at first only the first virtual object is displayed in the virtual scene, with no other virtual objects; after the displayed virtual scene is adjusted through a viewing-angle adjustment operation or a movement operation on the first virtual object, the second virtual object is also displayed in the adjusted virtual scene.
302. When a virtual object of the target type exists around the first virtual object, the terminal switches the display state of the function entry to a third display state in the virtual scene, where the function entry is the entry of the object detection function, and the third display state indicates that a virtual object of the target type exists around the first virtual object.
In an embodiment of the present application, a function entry of the object detection function is displayed in the virtual scene and is used for turning on that function, which detects virtual objects of the target type around the first virtual object. If no such virtual object exists, then even if the object detection function is turned on through the function entry, no virtual object of the target type is displayed in a special display mode in the virtual scene. Therefore, once a virtual object of the target type exists around the first virtual object, the display state of the function entry is switched in the virtual scene as a strong reminder, prompting the user to turn on the object detection function in time through the entry so as to detect the surrounding virtual objects of the target type. The user no longer needs to search manually for surrounding virtual objects of the target type; an automatic detection-and-reminder mode is realized, which helps the user view virtual objects of the target type, improves human-computer interaction efficiency, and further improves the user experience.
The function entry may take any form, e.g. a button. The third display state is a display state different from that of other entries or controls in the virtual scene; for example, the third display state is a highlighted state or a blinking state.
For example, in the case where there is no virtual object of the target type around the first virtual object, the function entry is displayed in a dark state, and in the case where there is a virtual object of the target type around the first virtual object, the function entry is highlighted.
In an embodiment of the present application, when the object detection function is not turned on, the display state of the function entry differs according to whether a virtual object of the target type exists around the first virtual object. As shown in fig. 4, while the virtual scene is displayed, whether a virtual object of the target type exists around the first virtual object is detected in real time. When none exists, the terminal displays the function entry in a fifth display state, a normal display state in which the entry is just an ordinary button. Once a virtual object of the target type is detected around the first virtual object, the terminal switches the display state of the function entry in the virtual scene to the third display state, displaying a special effect on the entry to prompt that a virtual object of the target type exists around the first virtual object.
In one possible implementation, when no virtual object of the target type exists around the first virtual object, the function entry of the object detection function is displayed in the same display state as the other entries or controls.
In one possible implementation, in case that no virtual object of the target type exists around the first virtual object, the function entry of the object detection function is displayed in an inactive state, and in case that a virtual object of the target type exists around the first virtual object, the function entry of the object detection function is displayed in an active state.
In an embodiment of the present application, the function entry cannot be triggered while displayed in the inactive state and can be triggered while displayed in the active state. This avoids the situation in which the entry is triggered while no virtual object of the target type exists around the first virtual object and nothing can be displayed, and thus guarantees the success rate of turning on the object detection function.
In one possible implementation, step 302 includes: when a virtual object of the target type exists within a target range, switching the display state of the function entry to the third display state in the virtual scene, the target range being a circular range centered on the position of the first virtual object with the first distance as its radius.
In an embodiment of the present application, when the object detection function is not turned on, whether a virtual object of the target type exists around the first virtual object can be detected within the target range centered on the position of the locally controlled virtual object. Once such an object exists, the display state of the function entry is switched immediately as a prompt. Because detection is performed within the target range rather than over an arbitrary range, the user is prompted to control the first virtual object to move in the virtual scene so as to change the detection range and thereby detect virtual objects within different ranges, which improves the interactivity of the game.
The target range is a circular range, and the first distance is an arbitrary distance, for example 10 or 20 meters. In an embodiment of the present application, the target range is also equivalent to a reminder range: when a virtual object of the target type is detected within the reminder range around the first virtual object, the display state of the function entry is switched as a reminder.
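The circular target range described above amounts to a simple distance comparison; the following is a minimal sketch in which the function and parameter names are hypothetical, not taken from the patent:

```python
import math

def in_target_range(first_pos, other_pos, first_distance):
    """True when `other_pos` lies within the circular target range centered
    on `first_pos` (the first virtual object) with radius `first_distance`."""
    return math.dist(first_pos, other_pos) <= first_distance

# With a first distance of 20 meters, an object 10 m to the east is inside:
print(in_target_range((0.0, 0.0), (10.0, 0.0), 20.0))  # True
```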
Optionally, the target range is a detection range corresponding to the object detection function.
In an embodiment of the present application, the target range is the detection range corresponding to the object detection function, so detection can be performed in advance over that range even before the function is turned on. This realizes an early warning of virtual objects of the target type around the first virtual object and improves the display effect.
Optionally, the first distance is a detection distance corresponding to the first virtual object.
In the embodiment of the application, the detection distances corresponding to different virtual objects can be different, so that various possibilities of the game are enriched, and the interest of the game is improved.
Optionally, determining the first distance includes: determining the detection distance corresponding to the detection-assist prop with which the first virtual object is equipped, and determining the sum of that detection distance and the default detection distance of the first virtual object as the first distance.
In an embodiment of the present application, the default detection distances of the virtual objects in the virtual scene are the same or different. Detection-assist props exist in the virtual scene and can increase a virtual object's detection distance; the sum of the first virtual object's default detection distance and the detection distances of its equipped detection-assist props serves as the detection distance of the first virtual object. This prompts the user to control the virtual object to search the virtual scene for detection-assist props so as to increase its detection distance, which improves the interactivity and fun of the game.
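As a sketch of the first-distance computation just described (hypothetical names and values, assuming prop bonuses simply add to the default distance):

```python
def first_distance(default_detection_distance, prop_distances):
    """First distance = the first virtual object's default detection distance
    plus the detection distances of all equipped detection-assist props."""
    return default_detection_distance + sum(prop_distances)

# A 10 m default distance plus two equipped virtual radars of 5 m and 8 m:
print(first_distance(10.0, [5.0, 8.0]))  # 23.0
```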
Optionally, different detection auxiliary props correspond to different detection distances. For example, the detection auxiliary prop comprises a plurality of virtual radars with different grades, and the higher the grade is, the larger the detection distance corresponding to the virtual radars is.
Optionally, the default detection distance of a virtual object is related to the object information of that virtual object. For example, the object information indicates the occupation type of the virtual object or the camp to which it belongs; different occupation types correspond to different unit detection distances, and so do different camps. Based on the object information of the first virtual object, the sum of the unit detection distance corresponding to its occupation type and the unit detection distance corresponding to its camp is determined as its default detection distance.
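The derivation of the default detection distance from object information might look like the following; the occupation types, camps, and unit distances are invented purely for illustration:

```python
# Hypothetical unit detection distances per occupation type and per camp.
OCCUPATION_UNIT_DISTANCE = {"scout": 12.0, "engineer": 8.0}
CAMP_UNIT_DISTANCE = {"red": 3.0, "blue": 5.0}

def default_detection_distance(occupation, camp):
    """Default detection distance = unit distance of the occupation type
    plus unit distance of the camp the object belongs to."""
    return OCCUPATION_UNIT_DISTANCE[occupation] + CAMP_UNIT_DISTANCE[camp]

print(default_detection_distance("scout", "blue"))  # 17.0
```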
Alternatively, a virtual object can be equipped with multiple detection-assist props simultaneously. In an embodiment of the present application, the detection distances of different detection-assist props may be the same or different, and equipping multiple props at once encourages the user to give the first virtual object the props with the largest possible detection distances, which increases its detection distance, improves the fun of the game, and further improves the user experience.
In one possible implementation, the process of switching the display state of the function entry includes: when a third virtual object belonging to the target type exists within the target range, determining the distance between the first virtual object and the third virtual object; and, when that distance is smaller than a second distance, switching the display state of the function entry to the third display state in the virtual scene, the second distance being the distance within which the third virtual object can be detected.
The first distance corresponds to a detection distance corresponding to the first virtual object, and the second distance corresponds to a detected distance of the third virtual object. The third virtual object is any virtual object belonging to the target type in the target range, and the third virtual object may be the same as the second virtual object or may be different from the second virtual object.
In an embodiment of the present application, a virtual object has not only a detection distance but also a detected distance, and the terminal detects surrounding virtual objects of the target type according to the detection distance of the virtual object it controls. Therefore, when the object detection function is not turned on, surrounding virtual objects of the target type are detected according to the detection distance of the first virtual object, and when a third virtual object is found, whether it can actually be detected by the first virtual object is further determined according to the third virtual object's detected distance. This adds difficulty to detecting virtual objects of the target type, avoids making such detection so simple that the game loses interest, and improves the fun of the game.
It should be noted that, in the embodiment above, virtual objects of the target type around the first virtual object are detected according to the target range. In another embodiment, whether a virtual object of the target type exists around the first virtual object is determined according to the detected distance of each virtual object, without considering the target range. That is, the process of determining that a virtual object of the target type exists around the first virtual object includes: for each virtual object of the target type in the virtual scene, determining the distance between the first virtual object and that virtual object based on their positions in the virtual scene; and, when that distance is smaller than the detected distance of that virtual object, determining that a virtual object of the target type exists around the first virtual object.
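The range-free variant just described, which compares the first object's distance to each target-type object against that object's own detected distance, can be sketched as follows (the data layout and names are assumptions):

```python
import math

def target_type_exists_nearby(first_pos, target_type_objects):
    """`target_type_objects` is a list of (position, detected_distance)
    pairs, one per target-type virtual object in the scene. A target-type
    object counts as 'around' the first virtual object when their distance
    is smaller than that object's own detected distance."""
    return any(
        math.dist(first_pos, pos) < detected_distance
        for pos, detected_distance in target_type_objects
    )

# One object 5 m away with a 6 m detected distance, one far out of reach:
objects = [((5.0, 0.0), 6.0), ((40.0, 0.0), 5.0)]
print(target_type_exists_nearby((0.0, 0.0), objects))  # True
```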
In an embodiment of the present application, one or more virtual objects of the target type exist in the virtual scene, and they may or may not be displayed in the virtual scene shown by the terminal. Regardless of whether a virtual object of the target type is displayed in the current virtual scene, the distance between the first virtual object and each such object can be determined from their positions, and from those distances it can be determined whether a virtual object of the target type exists around the first virtual object. The display state of the function entry of the object detection function can thus be updated in real time, ensuring that the determination is sufficiently accurate and guaranteeing the accuracy of the game data.
303. When the function entry is displayed in the third display state, the terminal responds to a trigger operation on the function entry by displaying a target display special effect in the virtual scene, the target display special effect indicating that the object detection function has been turned on.
In an embodiment of the present application, the function entry displayed in the third display state indicates that a virtual object of the target type exists around the first virtual object. When the object detection function is turned on through the function entry, the target display special effect is displayed in the virtual scene to prompt that the function is currently on, which serves as a reminder and improves the display effect. Turning on the object detection function in time makes it possible to view the surrounding virtual objects of the target type, avoids the situation in which the function is turned on but no virtual object of the target type can be detected, and thus ensures the accuracy and effectiveness of object detection. Moreover, turning on the function through the entry is convenient for the user and improves human-computer interaction efficiency.
The target display special effect is any type of special effect. For example, it is a light-curtain effect: when the object detection function is turned on, the displayed virtual scene is presented under the light curtain. As another example, it is a green-screen effect, or an effect of some other color: when the object detection function is turned on, the displayed virtual scene appears green.
In an embodiment of the present application, turning on the object detection function is equivalent to the terminal opening a special field of view. The terminal displays the virtual scene under this special field of view together with the target display special effect, so as to distinguish it from the virtual scene in normal mode. In the virtual scene under the special field of view, virtual objects of the target type around the first virtual object are called special virtual objects and can be displayed in the first display state.
In one possible implementation, displaying the target display special effect includes: in response to a turn-on operation of the object detection function, displaying the target display special effect diffusing outward around the first virtual object in the virtual scene.
In an embodiment of the present application, the trigger operation on the function entry corresponds to the turn-on operation of the object detection function. In response to the turn-on operation, the target display special effect is displayed in the virtual scene, diffusing outward with the locally controlled first virtual object at its center, so that a dynamic picture of detection centered on the first virtual object is displayed, which improves the display effect and thereby the user experience.
Optionally, the coverage of the target display effect is equal to the target range.
In an embodiment of the present application, the coverage of the target display special effect is equal to the target range: the special effect is displayed within the target range centered on the first virtual object, and since the target range is the detection range of the first virtual object, the special effect visualizes the detection range of the object detection function. By viewing the special effect the user learns the detection range, which serves as a reminder; knowing the detection range, the user can adjust the position of the first virtual object in time so as to detect virtual objects within different ranges, which improves human-computer interaction efficiency.
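One way to realize the diffusion described above is to grow the effect's radius over time and clamp it at the target range; the following minimal sketch assumes linear growth, and all names and the growth curve are illustrative assumptions:

```python
def effect_radius(elapsed_seconds, target_range, diffusion_duration):
    """Radius of the target display special effect as it diffuses outward
    from the first virtual object, growing linearly until its coverage
    equals the target range."""
    if diffusion_duration <= 0:
        return target_range
    progress = min(1.0, elapsed_seconds / diffusion_duration)
    return target_range * progress

# Halfway through a 1-second diffusion over a 20 m target range:
print(effect_radius(0.5, 20.0, 1.0))  # 10.0
```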
It should be noted that the embodiment above takes as an example a function entry of the object detection function displayed in the virtual scene, through which the function is turned on and the target display special effect is displayed. In another embodiment, steps 302 to 303 need not be executed; instead, the target display special effect is displayed in the virtual scene in response to a turn-on operation of the object detection function performed in some other way.
It should also be noted that the embodiment above takes the display of the target display special effect upon turning on the object detection function as an example. In another implementation, steps 302 to 303 need not be executed, and the object detection function is turned on in some other manner.
304. When the object detection function has been turned on, the terminal displays, in the virtual scene, a second virtual object belonging to the target type in a first display state, where the object detection function is used for detecting virtual objects of the target type around the first virtual object, and the first display state indicates that the second virtual object has been detected.
In one possible implementation, this step 304 includes switching, in the virtual scene, a display state in which the second virtual object is displayed to the first display state, in the event that the object detection function has been turned on.
In an embodiment of the present application, the display state of the second virtual object in the virtual scene changes between before and after the object detection function is turned on, so as to show that the second virtual object belongs to the target type and has been detected. The user thus sees the detected second virtual object in time, which serves as a reminder and allows the first virtual object and the second virtual object to be controlled to interact subsequently, without manually searching for virtual objects of the target type around the first virtual object. This guarantees the interaction effect, improves interaction efficiency, and further improves the user experience.
The display state of the second virtual object while the object detection function is not turned on is arbitrary; for example, it is a stealth state. As another example, while the object detection function is not turned on, the display state of the second virtual object is the same as that of virtual objects of other types, whereas the first display state is different from the display state of those other virtual objects.
For example, while the object detection function is not turned on, the second virtual object is displayed in the stealth state: even if it lies within the field of view of the terminal, it cannot be displayed in the virtual scene. When the object detection function is turned on, not only the original picture of the virtual scene but also the second virtual object in the special display state can be displayed, so that a virtual object in the stealth state becomes visible and the user can interact with it.
As another example, while the object detection function is not turned on, the virtual scene displays the first virtual object together with the second virtual object and virtual objects of other types, all in the same display state; by viewing the displayed virtual scene through the terminal, the user can only tell that the second virtual object is currently displayed, not that it belongs to the target type. When the object detection function is turned on, the second virtual object is displayed in the special display state in the virtual scene, and the user can tell from the displayed scene that it belongs to the target type, so that the first virtual object and the second virtual object can subsequently be controlled to interact.
Optionally, the process of displaying the second virtual object includes: when the object detection function has been turned on and the second virtual object is located within the target range, switching the display state of the second virtual object to the first display state in the virtual scene, the target range being a range centered on the position of the first virtual object with the first distance as its radius.
In an embodiment of the present application, when the object detection function has been turned on, whether a virtual object of the target type exists around the first virtual object can be detected within the target range centered on the position of the locally controlled virtual object, so that detected virtual objects of the target type are displayed in the virtual scene in time. Because only the target range serves as the detection range, the first virtual object can be controlled to move in the virtual scene so as to change the detection range and thereby detect virtual objects within different ranges, which improves the interactivity of the game.
It should be noted that the process of determining the first distance is the same as the process of determining the first distance in step 302, and will not be described herein.
Optionally, the process of displaying the second virtual object includes: in a case where the object detection function has been turned on, the second virtual object is located within the target range, and the distance between the first virtual object and the second virtual object is smaller than a third distance, switching the display state of the second virtual object in the virtual scene to the first display state, the third distance being the distance within which the second virtual object can be detected.
The first distance corresponds to the detection distance of the first virtual object, and the third distance corresponds to the detected distance of the second virtual object.
In the embodiment of the application, each virtual object in the virtual scene corresponds not only to a detection distance but also to a detected distance. The terminal detects surrounding virtual objects according to the detection distance of the virtual object it controls, and a virtual object can only be detected within its own detected distance. Therefore, when the object detection function is started, surrounding virtual objects belonging to the target type are detected according to the detection distance of the first virtual object, and when the second virtual object is found, whether it can actually be detected by the first virtual object is further determined according to the detected distance of the second virtual object, which improves the interest of the game.
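The two distances can be combined in one check; this sketch is an assumption about how the conditions compose (the strict inequality for the third distance follows the wording above):

```python
import math

def can_detect(first_pos, detection_distance, second_pos, detected_distance):
    """The second object is detectable only when it is inside the first
    object's detection range (first distance) AND the separation is
    smaller than the second object's own detected distance (third distance)."""
    gap = math.hypot(second_pos[0] - first_pos[0],
                     second_pos[1] - first_pos[1])
    return gap <= detection_distance and gap < detected_distance
```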
It should be noted that, in the embodiment of the present application, virtual objects that are around the first virtual object and belong to the target type are detected according to the target range. In another embodiment, whether a virtual object of the target type exists around the first virtual object is determined according to the detected distance of each virtual object, without considering the target range; then, when a virtual object of the target type exists around the first virtual object and is located within the field of view, the display state of that virtual object in the virtual scene is switched to the first display state.
It should be noted that, the process of determining whether the target type virtual object exists around the first virtual object is the same as the process of determining whether the target type virtual object exists around the first virtual object in step 302, which is not described herein.
305. And the terminal displays information associated with the second virtual object in the virtual scene in a case where the second virtual object is aimed at.
In one possible implementation, this step 305 includes displaying information associated with a second virtual object in the virtual scene with the sight in the virtual scene aimed at the second virtual object.
The sight is used for aiming at any virtual object in the virtual scene, and can be represented in any form, for example, as a dot or as a crosshair. In the embodiment of the application, in the displayed virtual scene, the display position of the sight overlapping the display position of the second virtual object means that the sight is aimed at the second virtual object.
In the embodiment of the application, the sight is displayed in the virtual scene, a detected virtual object belonging to the target type is aimed at through the sight, and the associated information can be displayed directly in the virtual scene. The user can thus aim at any virtual object by controlling the position of the sight, which simplifies the user's operation and improves the man-machine interaction efficiency.
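Treating "the sight overlaps the object's display position" as a screen-space point-in-rectangle test gives a hedged sketch (the rectangle format is an assumption):

```python
def sight_aims_at(sight_pos, object_rect):
    """True when the sight's screen position falls inside the object's
    on-screen bounding rectangle (x, y, width, height)."""
    x, y, w, h = object_rect
    sx, sy = sight_pos
    return x <= sx <= x + w and y <= sy <= y + h
```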
Optionally, the sight is always displayed in the center of the virtual scene.
In the embodiment of the application, during the display of the virtual scene, the virtual scene displayed by the terminal changes in response to a viewing angle adjustment operation, but the sight is always displayed at the center position of the virtual scene. The user can therefore check the position of the sight in time to execute corresponding operations, avoiding the situation where the position of the sight cannot be determined because of the virtual scene picture; this ensures the display effect and also improves the man-machine interaction efficiency.
Optionally, the information displaying process includes: in a case where the sight is aimed at the second virtual object, switching the display state of the sight in the virtual scene to a second display state and displaying the information associated with the second virtual object, the second display state indicating that the sight is aimed at a detected virtual object.
In the embodiment of the application, when the sight is aimed at a detected virtual object belonging to the target type, the display state of the sight also changes in the virtual scene to remind the user that the sight is aimed at a detected virtual object, so that the user can view the displayed information in time. This achieves a strong reminding effect, improves the display effect, and improves the user experience.
The second display state is an arbitrary display state that differs from the display state of the sight when it is not aimed at a detected virtual object of the target type. For example, when the sight is not aimed at such a virtual object, the display state of the sight is a stationary state or a default color; when the sight is aimed at such a virtual object, the second display state of the sight is a blinking state, an enlarged display state, a red display state, or the like.
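The state switch might be modelled as below; the concrete styling values are invented examples of the "blinking / enlarged / red" options listed above:

```python
DEFAULT_STATE = {"color": "default", "blinking": False, "scale": 1.0}
SECOND_STATE = {"color": "red", "blinking": True, "scale": 1.5}

def sight_display_state(aiming_at_detected_target):
    """Second display state while the sight is on a detected virtual
    object of the target type, otherwise the default state."""
    return SECOND_STATE if aiming_at_detected_target else DEFAULT_STATE
```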
It should be noted that the embodiment of the present application takes aiming at the second virtual object with the sight as an example; in another embodiment, the terminal can also aim at the second virtual object in other manners when displaying the virtual scene.
In one possible implementation, an aiming frame is displayed in the virtual scene, and aiming at the second virtual object comprises displaying information associated with the second virtual object in the virtual scene in a case where the second virtual object is located in the aiming frame.
In an embodiment of the application, the aiming frame is used for aiming at virtual objects, and the second virtual object being located in the aiming frame indicates that the second virtual object is aimed at. The aiming frame can be displayed at an arbitrary position on the display screen of the terminal.
In one possible implementation, aiming at the second virtual object includes displaying information associated with the second virtual object in the virtual scene in a case where the second virtual object is located at a target display position.
In the embodiment of the application, in the displayed virtual scene, the second virtual object can be displayed at any display position, but only when the second virtual object is located at the target display position is it aimed at, and only then is the information associated with the second virtual object displayed in the virtual scene.
The target display position is an arbitrary position in a display screen of the terminal, for example, the target display position is a center position or an upper left corner position.
In one possible implementation, the displayed information also matches the behavior information of the first virtual object; that is, the process of displaying information includes displaying, in the virtual scene, information that is associated with the second virtual object and matches the behavior information of the first virtual object, in a case where the second virtual object is aimed at.
In the embodiment of the application, in a case where the second virtual object is aimed at, information that is associated with the second virtual object and matches the behavior information of the first virtual object is displayed in the virtual scene. Because the displayed information matches the historical behavior of the first virtual object, the information that the first virtual object currently needs to view is predicted, which ensures the accuracy of information display. The user does not need to select the information by triggering a selection operation on the terminal, which saves the time required to view the information and ensures the information acquisition efficiency.
Wherein the behavior information of the first virtual object indicates a historical behavior of the first virtual object; for example, the behavior information indicates a task that the first virtual object is currently performing, or indicates that the first virtual object engaged in combat before and its health value decreased. Optionally, the behavior information indicates the historical behavior of the first virtual object over a recent time period, for example, over the preceding 10 minutes.
The second virtual object is associated with various pieces of information, and different pieces contain different contents. For example, the second virtual object is a virtual object used for providing information guidance in the virtual scene; one piece of associated information indicates at which position supplies can be obtained to restore a virtual object's health value, and another piece indicates at which place a certain virtual object can be acquired.
For example, behavior information of a first virtual object indicates a task that the first virtual object is currently executing, and information associated with a second virtual object and matching the behavior information of the first virtual object indicates how the task can be completed, or where to complete the task.
For another example, the behavior information of the first virtual object indicates that the first virtual object engaged in combat before and its health value decreased, and the information that is associated with the second virtual object and matches the behavior information indicates where supplies can be obtained to restore the virtual object's health value.
Optionally, during the virtual game, the terminal or the server stores behavior information of the virtual object controlled by the terminal, so that information associated with the viewed virtual object is displayed according to the behavior information.
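One way to match the stored behavior information against the second virtual object's candidate information; the tags and texts here are hypothetical, not taken from the patent:

```python
def select_matching_info(behavior_tag, associated_infos):
    """Pick the piece of associated information whose tag matches the first
    virtual object's recent behavior; fall back to the first entry when
    nothing matches."""
    for info in associated_infos:
        if info["tag"] == behavior_tag:
            return info["text"]
    return associated_infos[0]["text"]
```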
In one possible implementation, this step 305 includes: displaying information associated with the second virtual object in the virtual scene in a case where the second virtual object is aimed at and the distance between the first virtual object and the second virtual object is within a distance range.
In the embodiment of the application, the second virtual object also corresponds to a distance range within which its information can be viewed; that is, the terminal displays the information associated with the second virtual object only when the distance between the first virtual object and the second virtual object is within the distance range, so as to simulate the effect of the first virtual object viewing the information associated with the second virtual object. This adds a certain limiting condition to viewing the information associated with a virtual object and improves the interest of the game.
The distance range is an arbitrary range; for example, the minimum distance in the distance range is 10 meters and the maximum distance is 20 meters, that is, the terminal displays the information associated with the second virtual object only when the distance between the first virtual object and the second virtual object is not less than 10 meters and not more than 20 meters.
Optionally, the method further comprises: in a case where the second virtual object is aimed at and the distance between the first virtual object and the second virtual object is outside the distance range, displaying distance prompt information in the virtual scene, the distance prompt information being used for prompting that the distance between the first virtual object and the second virtual object is outside the distance range.
In the embodiment of the application, aiming at the second virtual object indicates that the user wants to view the information associated with the second virtual object. If the distance between the first virtual object and the second virtual object is outside the distance range, the information cannot be viewed from the current position of the first virtual object, so the distance prompt information is displayed in the virtual scene to prompt that the current distance does not satisfy the distance range. This prompts the user to control, through the terminal, the first virtual object to move so as to change the distance between the first virtual object and the second virtual object, thereby achieving a strong prompting effect and improving the user experience.
Optionally, the distance prompt information comprises first prompt information or second prompt information, the first prompt information being used for prompting that the first virtual object is too close to the second virtual object, and the second prompt information being used for prompting that the first virtual object is too far from the second virtual object. The process of displaying the distance prompt information includes: in a case where the second virtual object is aimed at, displaying the first prompt information in the virtual scene when the distance between the first virtual object and the second virtual object is smaller than the minimum value of the distance range, and displaying the second prompt information in the virtual scene when the distance is larger than the maximum value of the distance range.
In the embodiment of the application, in a case where the distance between the first virtual object and the second virtual object is outside the distance range, different prompt information is displayed according to the distance between the two objects so as to guide the user to adjust in time. The user thus knows how to adjust the position of the first virtual object in order to view the information associated with the second virtual object, which ensures the information viewing efficiency and, in turn, the interaction efficiency.
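The distance-range gating together with the two prompts can be sketched as a single decision function, using the 10/20-metre example bounds given above (the return labels are illustrative):

```python
def aiming_feedback(distance, min_dist=10.0, max_dist=20.0):
    """Show the associated information inside the viewing range; otherwise
    return the first prompt (too close) or the second prompt (too far)."""
    if distance < min_dist:
        return "first_prompt"   # first virtual object is too close
    if distance > max_dist:
        return "second_prompt"  # first virtual object is too far
    return "show_info"
```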
According to the scheme provided by the embodiment of the application, after the object detection function is started, virtual objects that belong to the target type and are around the virtual object controlled by the local end are detected and displayed in a special display state, and the information associated with a detected virtual object of the target type can be viewed by aiming at it and is displayed directly in the virtual scene. This realizes a new interaction mode: virtual objects of the target type can be detected quickly without manually searching for them in the virtual scene, which saves the time for detecting virtual objects of the target type and improves the detection efficiency. At the same time, when viewing the information associated with a detected virtual object, the first virtual object does not need to be controlled to approach the other virtual object, which saves the time required to control the first virtual object to approach the second virtual object, ensures the interaction effect, and improves the man-machine interaction efficiency.
The embodiment shown in fig. 3 is described by taking the case where the object detection function is turned on through the function entry of the object detection function as an example, but in another embodiment the object detection function can also be turned on in other manners. In one possible implementation, an object detection prop exists in the virtual scene and has the object detection function; in a case where the first virtual object is equipped with the object detection prop, the object detection function is turned on, so that virtual objects of the target type can subsequently be detected according to the above embodiment.
In the embodiment of the application, an object detection prop with the object detection function exists in the virtual scene, and when a virtual object is equipped with the object detection prop, whether virtual objects of the target type exist around that virtual object can be detected. This makes the process of detecting virtual objects convenient, prompts the user to control the virtual object to acquire the object detection prop in the virtual scene, and improves the interactivity of the game.
Optionally, multiple object detection props exist in the virtual scene, and different object detection props are used for detecting virtual objects of different types. In a case where the first virtual object is equipped with any object detection prop, whether virtual objects of the type corresponding to the equipped object detection prop exist around the first virtual object is detected according to that type.
In the embodiment of the application, a plurality of object detection props for detecting different types of virtual objects exist in the virtual scene, so that different requirements can be met, and convenience in detecting different types of virtual objects is improved.
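A prop-to-type mapping is one plausible way to realize "different props detect different types"; the prop names and type labels below are invented for illustration:

```python
PROP_TARGET_TYPE = {
    "guide_detector": "guide_npc",
    "item_detector": "virtual_item",
}

def detect_with_prop(equipped_prop, nearby_objects):
    """Only objects of the type corresponding to the equipped detection
    prop are detected; nothing is detected without a prop equipped."""
    target_type = PROP_TARGET_TYPE.get(equipped_prop)
    if target_type is None:
        return []
    return [o for o in nearby_objects if o["type"] == target_type]
```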
In the embodiment shown in fig. 3, the object detection function is turned on through the function entry of the object detection function as an example, but in another embodiment the object detection function can be turned on only in a specific area. That is, the process of turning on the object detection function includes: in a case where the first virtual object is located in a target area, displaying the target display special effect in the virtual scene in response to the turning-on operation.
In the embodiment of the application, the object detection function can be started only when the first virtual object is positioned in the target area, so that the object detection function is limited, the situation that the game is unbalanced due to excessive use of the object detection function is avoided, and the fairness of the game is ensured.
The target area is any area in the virtual scene. For example, the target area is the detection area corresponding to a task of the first virtual object, and the task indicates that virtual objects belonging to the target type in the target area are to be detected; the user can therefore control, through the terminal, the first virtual object to enter the target area and then start the object detection function to detect the virtual objects indicated by the task.
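Restricting the function to a target area reduces to a containment check; modelling the area as an axis-aligned rectangle is an assumption of this sketch (the patent allows any area shape):

```python
def can_turn_on(first_pos, target_area):
    """True when the first virtual object stands inside the target area,
    given here as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = target_area
    px, py = first_pos
    return x_min <= px <= x_max and y_min <= py <= y_max
```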
It should be noted that, on the basis of the embodiment shown in fig. 3, an obstacle may exist between the first virtual object and the second virtual object, and when the second virtual object is detected around the first virtual object, it can still be displayed. That is, the process of displaying the second virtual object includes the following three manners.
In the first manner, when the object detection function is turned on and an obstacle exists between the second virtual object and the first virtual object, the display state of the region of the obstacle that contains and blocks the second virtual object is switched to a transparent state in the virtual scene, and the second virtual object is displayed in the first display state.
Wherein the obstacle is any type of virtual obstacle, such as a wall of a virtual building or a virtual car. The region of the obstacle that contains and blocks the second virtual object is a partial region of the obstacle and can have an arbitrary shape, for example, a square, a circle, or an irregular shape. For example, if the obstacle is a wall of a virtual building and the second virtual object is inside the virtual building, the region that contains and blocks the second virtual object is a partial region of the wall.
In the embodiment of the application, in a case where a second virtual object belonging to the target type is detected, even if an obstacle exists between the first virtual object and the second virtual object, the display state of the region of the obstacle that contains and blocks the second virtual object can be switched to a transparent state so that the second virtual object is displayed. This avoids the second virtual object being blocked, so the second virtual object can be aimed at directly to view the associated information, without controlling the first virtual object to go around the obstacle; this facilitates subsequently viewing the information associated with the second virtual object and improves the interaction efficiency.
In the second manner, when the object detection function is turned on and an obstacle exists between the second virtual object and the first virtual object, the transparency of the obstacle is increased in the virtual scene, and the second virtual object is displayed in the first display state.
In the embodiment of the application, when the transparency of the obstacle is increased, virtual objects blocked by the obstacle can be displayed, and the greater the transparency of the obstacle, the more clearly the blocked virtual objects are displayed. For example, when the transparency of the obstacle is increased, the obstacle may show only a contour.
In the embodiment of the application, in a case where a second virtual object belonging to the target type is detected, even if an obstacle exists between the first virtual object and the second virtual object, the transparency of the obstacle can be increased so that the blocked second virtual object is displayed. This avoids the second virtual object being blocked, so the second virtual object can be aimed at directly to view the associated information, without controlling the first virtual object to go around the obstacle; this facilitates viewing the information associated with the second virtual object and improves the interaction efficiency.
In the third manner, when the object detection function is turned on and an obstacle exists between the second virtual object and the first virtual object, the outline of the second virtual object is displayed in the region of the obstacle that contains and blocks the second virtual object, the first display state being an outline display state.
In the embodiment of the application, the first display state is an outline display state. In a case where a second virtual object belonging to the target type is detected, even if an obstacle exists between the first virtual object and the second virtual object, the outline of the blocked second virtual object is displayed in the region of the obstacle that contains and blocks it, so the user can view the blocked second virtual object. This avoids the second virtual object being blocked, so the second virtual object can be aimed at directly to view the associated information, without controlling the first virtual object to go around the obstacle; this facilitates subsequently viewing the information associated with the second virtual object and improves the interaction efficiency.
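For the first and third manners, the "region of the obstacle that contains and blocks the second virtual object" can be approximated as the screen-space intersection of the two bounding rectangles; this rectangle-based sketch is an assumption (the patent also allows circular or irregular regions):

```python
def occluding_region(obstacle_rect, object_rect):
    """Intersection of the obstacle's and the object's on-screen bounding
    rectangles (x, y, w, h): the partial region to make transparent or to
    draw the outline on. Returns None when there is no overlap."""
    ax, ay, aw, ah = obstacle_rect
    bx, by, bw, bh = object_rect
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2 - x1, y2 - y1)
```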
It should be noted that the embodiment shown in fig. 3 takes the case where the second virtual object is displayed in the virtual scene as an example. In another embodiment, when the object detection function is turned on, the second virtual object is not yet displayed in the virtual scene displayed by the terminal; the user triggers a viewing angle adjustment operation or a movement operation of the virtual object through the terminal, so that the terminal updates the displayed virtual scene and the second virtual object displayed in the first display state in the virtual scene is continuously adjusted. The process by which the terminal updates the displayed virtual scene is the same as the adjustment of the displayed virtual scene in step 301 above, and will not be described again here.
On the basis of the embodiment shown in fig. 2, in a case where the information associated with the second virtual object is displayed, the embodiment of the present application can also display detailed information of the second virtual object; the specific process is described in the following embodiment.
Fig. 5 is a flowchart of a virtual scene display method provided by an embodiment of the present application. The method is executed by a terminal. As shown in fig. 5, the method includes:
501. the terminal displays a virtual scene in which a first virtual object is displayed.
The step 501 is the same as the step 301, and will not be described again.
502. And when the object detection function is started, the terminal displays a second virtual object belonging to the target type in a first display state in the virtual scene, wherein the object detection function is used for detecting virtual objects belonging to the target type around the first virtual object, and the first display state indicates that the second virtual object is detected.
The step 502 is similar to the step 304, and will not be described again.
503. And when the terminal aims at the second virtual object and the detailed information of the second virtual object has not been displayed, the terminal displays prompt information associated with the second virtual object in the virtual scene, the prompt information being used for prompting that the detailed information of the second virtual object has not been displayed.
In the embodiment of the application, the user can view the detail information associated with a virtual object through the terminal, and when aiming at a detected virtual object belonging to the target type, different information is displayed as a reminder depending on whether the detail information associated with that virtual object has been displayed. The user can thus know whether the detail information of the detected virtual object has been viewed before, which achieves a strong reminding effect and allows the user to subsequently execute different operations depending on whether the details have been viewed, improving the flexibility and interest of the game as well as the user experience.
The detail information is used for describing details of the second virtual object; for example, if the second virtual object is a virtual object, the detail information describes the name, size, object type, object parameters, and acquisition manner of that virtual object. For another example, if the second virtual object is a virtual character, the detail information describes the name of the virtual character, its state, the cause of that state, the camp it belongs to, and the like. The prompt information is arbitrary information; for example, the prompt information is "???" or "location". Optionally, the prompt information is highlighted in the virtual scene to achieve a reminding effect.
In the embodiment of the application, whether the detail information of the second virtual object is displayed is equivalent to whether the first virtual object looks up the detail information of the second virtual object, so as to simulate the condition that the first virtual object detects the second virtual object.
Optionally, whether the detail information of the second virtual object has been displayed is determined through a display record corresponding to the terminal. That is, the process of determining whether the detail information of the second virtual object has been displayed includes: obtaining the display record of the terminal, the display record containing object identifiers whose indicated virtual objects have had their detail information viewed; determining that the detail information of the second virtual object has been displayed when the display record contains the object identifier of the second virtual object; and determining that the detail information of the second virtual object has not been displayed when the display record does not contain the object identifier of the second virtual object.
The display record is used for recording the object identifiers of virtual objects whose detail information has been displayed; optionally, the display record also contains the corresponding display time.
Optionally, maintaining the display record includes: the terminal, in response to a detail viewing operation on a virtual object, recording the object identifier of that virtual object in the display record.
In the embodiment of the application, the detail viewing operation indicates that the user wants to view the detail information of a virtual object; once a detail viewing operation on a virtual object is detected, the detail information of that virtual object will be displayed, so the object identifier of the virtual object is recorded in the display record, ensuring the accuracy of the display record.
Optionally, when an object identifier is to be recorded in the display record, whether the display record already contains that object identifier is checked first, and the object identifier is recorded in the display record only if it is not found.
Note that the display record corresponding to the terminal may be stored in the terminal or in the server. When the display record is stored in the server, the terminal sends a detail query request to the server, the detail query request carrying the object identifier of the second virtual object and being used for asking whether the terminal has displayed the detail information associated with the carried object identifier. The server, in response to the detail query request, performs a query based on the display record corresponding to the terminal in the above manner and returns a query result to the terminal, the query result indicating whether the terminal has displayed the detail information of the second virtual object, and the terminal displays the prompt information or the profile information based on the query result. For example, the terminal stores the prompt information or the profile information associated with the second virtual object, and the query result returned by the server only indicates whether the terminal has displayed the detail information of the second virtual object, so the terminal displays the corresponding information based on the query result. For another example, the server stores the prompt information or the profile information associated with the second virtual object; after the server completes the query, the query result returned to the terminal contains the prompt information or the profile information, and the terminal only needs to display the information contained in the query result.
In the embodiment of the application, a display record corresponding to the terminal is stored in the terminal or in the server to record whether the terminal has displayed the detail information of the second virtual object, that is, whether the details of the second virtual object have been viewed. When aiming at a detected second virtual object whose details have already been viewed, the terminal displays the profile information of the second virtual object to indicate that the details have been viewed; when aiming at a detected second virtual object for the first time, whose detail information has not yet been viewed, the terminal displays the prompt information of the second virtual object.
504. In response to a detail viewing operation on the prompt information, the terminal highlights the second virtual object in the virtual scene and displays the detail information of the second virtual object.
In the embodiment of the application, while the prompt information of the second virtual object is displayed, the detail information of the second virtual object can be viewed via the prompt information, so that the second virtual object is highlighted in the virtual scene and its detail information is displayed, ensuring the display effect.
In one possible implementation, displaying the detail information includes displaying only the second virtual object and the detail information in the virtual scene, with the second virtual object displayed enlarged.
In the embodiment of the application, when the detail information of the second virtual object is viewed, only the second virtual object and the detail information are displayed in the virtual scene; other content is no longer displayed, and the second virtual object is displayed enlarged. The user can therefore view the second virtual object and its detail information without interference from other content, which provides a strong reminding effect and ensures the display effect.
Optionally, when the sight aims at the second virtual object, both the prompt information of the second virtual object and a detail viewing option are displayed in the virtual scene; in response to a triggering operation on the detail viewing option, the second virtual object is highlighted in the virtual scene and its detail information is displayed.
Optionally, while the second virtual object is highlighted and the detail information is displayed, a click operation at any position in the virtual scene counts as a closing operation on the detail information; alternatively, a close option is also displayed in the virtual scene, and a triggering operation on the close option counts as a closing operation on the detail information.
In one possible implementation, in response to a detail viewing operation on the prompt information, the terminal controls the virtual camera to shoot the virtual scene according to the camera position and shooting view angle corresponding to the second virtual object, displays the virtual scene shot by the virtual camera, highlights the second virtual object in the virtual scene, and displays the detail information of the second virtual object.
In the embodiment of the application, each virtual object of the target type is configured with a corresponding camera position and shooting view angle, so that when the detail information of any target-type virtual object is viewed, the virtual camera can be controlled to shoot according to that object's camera position and shooting view angle. The virtual scene thus shot is then displayed, ensuring the display effect of the detail information of the second virtual object.
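The per-object camera lookup can be sketched as follows. The configuration table, coordinate values, and angle convention are illustrative assumptions; the application only states that each target-type object carries a camera position and a shooting view angle.

```python
from dataclasses import dataclass

@dataclass
class CameraShot:
    position: tuple    # (x, y, z) camera position for the detail view
    view_angle: tuple  # (yaw, pitch) shooting view angle, in degrees

# Hypothetical per-object camera configuration, keyed by object identifier.
DETAIL_CAMERA_CONFIG = {
    "obj-1": CameraShot(position=(1.0, 2.0, 3.0), view_angle=(90.0, -15.0)),
}

def camera_for_detail_view(object_id,
                           default=CameraShot((0.0, 0.0, 0.0), (0.0, 0.0))):
    """Return the configured camera shot for an object's detail view,
    falling back to a default shot for unconfigured objects."""
    return DETAIL_CAMERA_CONFIG.get(object_id, default)
```

On a detail viewing operation, the renderer would move the virtual camera to the returned position and view angle before shooting the scene.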
505. In response to a closing operation on the detail information, the terminal displays the profile information associated with the second virtual object in the virtual scene.
The content included in the profile information is a subset of the content in the detail information; for example, the profile information includes only the name of the second virtual object.
In the embodiment of the application, while the prompt information of the second virtual object is displayed, the detail information of the second virtual object can be viewed via the prompt information. After the detail information has been viewed, the original virtual scene is returned to in response to the closing operation on the detail information: the sight is again displayed aiming at the second virtual object, but the displayed prompt information is switched to profile information. This reflects that the detail information has been viewed, achieving the reminding effect, making the scene convenient for the user to watch, and improving the user experience.
The embodiment above takes as an example the case where the details of the second virtual object have not yet been viewed. In another embodiment, when the details of the second virtual object have already been viewed, the profile information associated with the second virtual object is displayed directly; that is, when the sight aims at the second virtual object, the profile information associated with the second virtual object is displayed in the virtual scene. The details can then be viewed again via the profile information, as in steps 504-505 above.
According to the embodiment of the application, depending on whether the detail information of the second virtual object has been displayed, different information is displayed when aiming at a detected second virtual object of the target type, achieving a reminding effect: the user knows whether the detail information associated with the second virtual object has already been viewed, which avoids repeatedly opening details that have already been seen, reduces unnecessary operations, and improves the interaction effect.
According to the scheme provided by the embodiment of the application, once the object detection function is turned on, virtual objects of the target type around the virtual object controlled by the local end are detected and displayed in a special display state, and information associated with a detected target-type virtual object can be viewed directly in the virtual scene by aiming at it. This realizes a new interaction mode: target-type virtual objects can be detected quickly without manually searching the virtual scene, which saves detection time and improves detection efficiency. Moreover, the information associated with a detected virtual object can be viewed without controlling the first virtual object to approach it, saving the time otherwise needed to move the first virtual object close to the second virtual object, ensuring the interaction effect, and improving human-computer interaction efficiency.
In addition to the embodiments shown above, when the object detection function is on, it may also be turned off, and after the object detection function is turned off, the display of the second virtual object changes. In one possible implementation, the method further includes cancelling display of the second virtual object in the virtual scene in response to a closing operation of the object detection function, or switching the display state of the second virtual object to a fourth display state in the virtual scene in response to a closing operation of the object detection function.
In the embodiment of the application, in response to a closing operation of the object detection function, the display of the second virtual object in the virtual scene changes to indicate that the object detection function is now off, achieving the reminding effect, improving the display effect of the virtual scene, and improving the user experience.
For example, if the second virtual object is invisible when the object detection function is not turned on, that is, it is displayed in a stealth state, then display of the second virtual object is cancelled when the object detection function is turned off, so that the user can learn that the object detection function has been turned off by observing the change in how the second virtual object is displayed.
The fourth display state is any display state; for example, virtual objects that do not belong to the target type are displayed in the fourth display state in the virtual scene.
For example, when the object detection function is not turned on, the second virtual object is displayed in the fourth display state, and when the object detection function is turned on, the second virtual object is displayed in the first display state, and when the object detection function is turned off, the display state of the second virtual object is restored to the fourth display state, that is, the second virtual object is displayed in the fourth display state.
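The state switching in this example can be summarized in a few lines. The state names are placeholders for the "first" and "fourth" display states; nothing here is prescribed by the application beyond the on/off behavior just described.

```python
def display_state(detection_on: bool, is_target_type: bool,
                  first_state: str = "first", fourth_state: str = "fourth") -> str:
    """A target-type object shows in the first display state while the
    object detection function is on, and reverts to the fourth display
    state when it is off; non-target objects stay in the fourth state."""
    if is_target_type and detection_on:
        return first_state
    return fourth_state
```

Turning the function off thus restores exactly the state the object had before the function was turned on, which is what signals the closure to the user.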
Optionally, the manner of triggering the closing operation of the object detection function includes the following three manners.
In the first mode, when the object detection function is on, a closing option of the object detection function is displayed in the virtual scene, and a triggering operation on the closing option counts as detection of the closing operation of the object detection function.
The closing option can be any type of option for turning off the object detection function.
Optionally, the closing option is obtained by the terminal, in response to a triggering operation on the function entry of the object detection function in the virtual scene, switching the function entry to be displayed as the closing option.
For example, a function entry of the object detection function is displayed in the virtual scene, the function entry being used to turn on the object detection function. The user clicks the function entry via the terminal; the terminal detects the click operation on the function entry, which counts as detecting the opening operation of the object detection function, turns on the object detection function, cancels display of the function entry, and displays the closing option at the original display position of the function entry, which serves as a reminder that the object detection function is currently on.
In the second mode, the object detection function remains on while the function entry of the object detection function is kept pressed, and detecting a release operation on the function entry corresponds to detecting the closing operation of the object detection function.
In the embodiment of the application, the function entry of the object detection function is displayed in the virtual scene, the function entry being used to turn on the object detection function. When the user presses the function entry and keeps it pressed, this corresponds to the terminal detecting the opening operation, and the object detection function is turned on; once the user releases the function entry and no longer keeps it pressed, this corresponds to the terminal detecting the release operation on the function entry, which in turn corresponds to detecting the closing operation of the object detection function.
In the third mode, when the duration for which the object detection function has been on reaches a target duration, this corresponds to detecting the closing operation of the object detection function.
The target duration is any duration; for example, the target duration is 10 minutes. The duration for which the object detection function has been on indicates how long the function has been turned on.
In the embodiment of the application, the object detection function can only stay on for the target duration each time it is turned on. Therefore, the terminal or the server keeps time while the object detection function is on, and the object detection function is closed promptly once the duration for which it has been on reaches the target duration. This prevents the object detection function from staying on for a long time, which would affect the balance of the game, and improves the interest of the game.
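The third mode's timeout check reduces to a simple elapsed-time comparison. This is a minimal sketch; the 600-second value follows the 10-minute example above, and the function name is an assumption.

```python
def should_auto_close(opened_at: float, now: float,
                      target_duration: float = 600.0) -> bool:
    """Third mode: the closing operation is triggered once the object
    detection function has been on for the target duration (600 s here,
    matching the 10-minute example)."""
    return (now - opened_at) >= target_duration
```

The terminal or server would record `opened_at` when the function is turned on and evaluate this check on each tick.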
It should be noted that the above three ways of triggering the closing operation can be combined arbitrarily. Taking the case where the detail information of the second virtual object has not yet been displayed as an example, as shown in fig. 6, the virtual scene display method includes the following steps:
Step 1: the terminal displays the first virtual object and the function entry of the object detection function in the virtual scene.
In the embodiment of the present application, after step 1, different operations on the function entry lead to different steps: a click operation on the function entry triggers steps 2-4 below, and a long-press operation on the function entry triggers steps 5-9 below.
Step 2: in response to a click operation on the function entry, the terminal switches the function entry of the object detection function to be displayed as the closing option of the object detection function, and displays the target display special effect to indicate that the object detection function is currently on.
Here, a click operation means that the press duration on the function entry is less than a first duration; the first duration is any duration, for example 0.2 seconds.
Step 3: with the object detection function on, the terminal displays the second virtual object belonging to the target type in the first display state in the virtual scene. When the sight in the virtual scene aims at the second virtual object, the terminal displays prompt information and a detail viewing option associated with the second virtual object in the virtual scene; in response to a triggering operation on the detail viewing option, the terminal displays the detail viewing interface, displays the second virtual object enlarged in the detail viewing interface, and displays the detail information of the second virtual object in the detail viewing interface. In response to a triggering operation on the close option in the detail viewing interface, or a click operation at any position in the detail viewing interface, the terminal closes the detail viewing interface, displays the virtual scene, and displays, in the virtual scene, the sight aiming at the second virtual object together with the profile information of the second virtual object.
Step 4: while the object detection function is on, when the duration for which the object detection function has been on reaches the target duration, or in response to a triggering operation on the closing option of the object detection function, the terminal cancels display of the target display special effect to indicate that the object detection function has been turned off.
It should be noted that the embodiment of the present application describes executing step 3 before step 4 only as an example; in another embodiment, at any point after step 2, the object detection function is turned off once the duration for which it has been on reaches the target duration, or once a triggering operation on the closing option of the object detection function is detected.
Step 5: in response to a long-press operation on the function entry, the terminal displays the target display special effect to indicate that the object detection function is currently on, and keeps the object detection function on while the function entry remains pressed.
Here, a long-press operation means that the press duration on the function entry is not less than the first duration; the first duration is any duration, for example 0.2 seconds.
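The click/long-press distinction above is a single threshold comparison on the press duration. A minimal sketch, assuming the 0.2-second example value:

```python
FIRST_DURATION = 0.2  # seconds; the example first-duration threshold

def classify_press(press_duration: float) -> str:
    """Distinguish a click from a long press on the function entry.

    A press shorter than the first duration is a click (steps 2-4);
    otherwise it is a long press (steps 5-9)."""
    return "click" if press_duration < FIRST_DURATION else "long_press"
```

In practice the terminal would measure the time between the press-down and release events on the function entry and dispatch to the corresponding step sequence.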
Step 6: with the object detection function on, the terminal displays the second virtual object belonging to the target type in the first display state in the virtual scene. When the sight in the virtual scene aims at the second virtual object, the terminal displays prompt information and a detail viewing option associated with the second virtual object in the virtual scene; in response to a triggering operation on the detail viewing option, the terminal displays a detail viewing interface, displays the second virtual object enlarged in the detail viewing interface, and displays the detail information of the second virtual object.
In the embodiment of the present application, after step 6, step 7 or step 8 can be performed.
Step 7: while the function entry remains pressed, in response to a triggering operation on the close option in the detail viewing interface, or a click operation at any position in the detail viewing interface, the terminal closes the detail viewing interface, displays the virtual scene, and displays, in the virtual scene, the sight aiming at the second virtual object together with the profile information of the second virtual object.
Step 8: while the detail viewing interface is displayed, in response to detecting a release operation on the function entry of the object detection function, the terminal keeps displaying the detail viewing interface. Then, in response to a triggering operation on the close option in the detail viewing interface, or a click operation at any position in the detail viewing interface, the terminal closes the detail viewing interface, displays the virtual scene, and displays, in the virtual scene, the sight aiming at the second virtual object, the profile information of the second virtual object, and the closing option of the object detection function. In response to a triggering operation on the closing option of the object detection function, the terminal cancels display of the target display special effect to indicate that the object detection function has been turned off.
Here, the release operation refers to an operation of no longer keeping the function entry pressed.
In the embodiment of the application, detecting a release operation on the function entry of the object detection function while the detail viewing interface is displayed means the interaction switches to a scheme similar to step 3, in which the detail viewing interface was opened based on a click operation on the function entry; the detail viewing interface therefore remains displayed, and the object detection function does not need to be closed immediately.
Step 9: while the object detection function is on, when the duration for which the object detection function has been on reaches the target duration, or in response to a release operation on the function entry of the object detection function, the terminal cancels display of the target display special effect to indicate that the object detection function has been turned off.
It should be noted that the embodiment of the present application describes executing step 8 before step 9 only as an example; in another embodiment, at any point after step 5, the object detection function is turned off once the duration for which it has been on reaches the target duration, or once a release operation on the function entry of the object detection function is detected.
The above embodiments take as an example the case where the function entry of the object detection function is displayed in the virtual scene. In another embodiment, the function entry of the object detection function is not displayed in the virtual scene; instead, a key on an input device serves as the function entry of the object detection function. For example, with the middle mouse button as the function entry, the user clicking the middle button corresponds to the click operation on the function entry in step 2 above, and the user long-pressing the middle button corresponds to the long-press operation on the function entry in step 5 above.
In the embodiment of the application, the second virtual object is an object associated with a task of the first virtual object, and viewing the detail information of the second virtual object can advance the progress of the task and prompt how to proceed. In one possible implementation, when the information associated with the second virtual object has been viewed and the object detection function is turned off, the method further comprises displaying game prompt information in the virtual scene, the game prompt information being used to prompt that the task of detecting the second virtual object is completed.
The game prompt information is any type of information; for example, the game prompt information indicates that the task of detecting the second virtual object is completed, and also indicates the next task to be executed.
For another example, the game prompt information also gives a hint about the game progress, so that after viewing it the user knows how to proceed.
In the embodiment of the application, the game prompt information can be displayed when the information associated with the second virtual object has been viewed and the object detection function is closed, thereby achieving the effect of in-game guidance.
The above embodiments take as an example turning on the object detection function to display a target-type virtual object. In another embodiment, turning on the object detection function switches the virtual scene to another virtual scene, and the first virtual object can be controlled to move in the other virtual scene so as to detect target-type virtual objects around the first virtual object there. In one possible implementation, the virtual scene in the above embodiments is referred to as a first virtual scene, and the virtual scene display method includes: the terminal displays the first virtual scene, in which the first virtual object and the function entry of the object detection function are displayed; when a target-type virtual object exists around the first virtual object, the display state of the function entry is switched to a third display state; in response to a triggering operation on the function entry, the first virtual scene is switched to be displayed as a second virtual scene, in which the first virtual object, the target display special effect, and the sight are displayed; a second virtual object belonging to the target type is displayed in the first display state in the second virtual scene; when the sight aims at the second virtual object, information associated with the second virtual object is displayed in the second virtual scene; and in response to a closing operation of the object detection function, the second virtual scene is switched back to the first virtual scene.
The second virtual scene contains content different from the first virtual scene. For example, the first virtual scene is a street-type virtual scene in which the first virtual object appears to be located in a street, and the second virtual scene is a cave-type virtual scene in which the first virtual object appears to be located in a cave. That is, with the position of the first virtual object unchanged, merely turning on the object detection function switches the virtual scene, and target-type virtual objects around the first virtual object can then be detected in the switched second virtual scene.
Based on the embodiments shown above, during the development of a game, for example an open-world action RPG (Role-Playing Game), the developer configures target configuration information. The target configuration information indicates the type of each virtual object in the virtual scene and its position in the virtual scene; during the game, the terminal or the server can then determine, according to the target configuration information, whether a target-type virtual object exists around the first virtual object.
For example, as shown in Table 1, the target configuration information includes, for each virtual object, an object identifier, a position, and a type identifier of the type to which the virtual object belongs, the position being expressed as coordinates. In the embodiment of the application, virtual objects of types other than the target type are also added to the virtual scene. For example, when the object detection function is on, that is, when the virtual scene in the special view is displayed, virtual object 2 in the virtual scene is displayed in a display state different from the first display state, which enriches the content displayed in the special view; when the sight aims at virtual object 2, neither the display state of the sight nor that of virtual object 2 changes. For another example, if the type of virtual object 3 is not specified in the target configuration information, virtual object 3 is not displayed in a special display state when the virtual scene in the special view is displayed.
TABLE 1
| Virtual object | Position | Type |
| --- | --- | --- |
| Object identifier 1 | Coordinates 1 | Type identifier 1 |
| Object identifier 2 | Coordinates 2 | Type identifier 2 |
| Object identifier 3 | Coordinates 3 | Type identifier 3 |
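The surrounding-object check driven by Table 1 can be sketched as a distance query over the configuration. The table contents, coordinates, and radius below are hypothetical placeholders mirroring the table's structure.

```python
import math

# Hypothetical target configuration mirroring Table 1:
# object identifier -> (coordinates, type identifier)
TARGET_CONFIG = {
    "obj-1": ((0.0, 0.0), "type-1"),
    "obj-2": ((3.0, 4.0), "type-2"),
    "obj-3": ((50.0, 50.0), "type-1"),
}

def nearby_objects_of_type(center, type_id, radius):
    """Return identifiers of configured objects of the given type
    whose position lies within radius of the center point."""
    found = []
    for object_id, (pos, t) in TARGET_CONFIG.items():
        if t != type_id:
            continue  # skip objects of other types
        if math.dist(center, pos) <= radius:
            found.append(object_id)
    return found
```

The terminal or server would call this with the first virtual object's current position to decide whether to change the function entry's display state.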
In addition, object information is configured for each target-type virtual object in the virtual scene. The object information includes the object identifier, the position, the type identifier of the type, the detection distance, the distance range, the detail information, the camera position and shooting view angle of the virtual camera when the detail information is displayed, and the game prompt information and task identifier used after the detail information has been displayed. The detection distance can be any distance, for example 10 meters. The distance range is the range within which the information associated with the virtual object can be viewed, for example 5 meters to 7 meters. The detail information describes the virtual object, for example "a television, the size of the television, and where the television can be obtained". The camera position and shooting view angle of the virtual camera when the detail information is displayed mean that, when displaying the detail information of the virtual object, the virtual camera shoots according to that camera position and shooting view angle, and the picture shot by the virtual camera is displayed, ensuring the display effect of the detail information. The game prompt information after the detail information is displayed indicates how to proceed, or which task to execute, after the detail information has been displayed, so as to provide guidance. The task identifier indicates the task associated with viewing the detail information, namely the task of viewing the detail information of the virtual object.
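The distance-range rule in the object information reduces to a two-sided bound check. A minimal sketch, assuming the 5-7 meter example range; the prompt strings are placeholders for the distance prompt information:

```python
def distance_prompt(distance: float, range_min: float = 5.0, range_max: float = 7.0):
    """Check the configured distance range (5-7 m in the example).

    Returns None when the associated information can be viewed, otherwise
    a prompt saying the first virtual object is too close or too far."""
    if distance < range_min:
        return "too close"
    if distance > range_max:
        return "too far"
    return None  # within range: associated information can be viewed
```

When the result is non-None, the terminal would display the corresponding distance prompt information around the second virtual object instead of its prompt or profile information.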
In the embodiment of the application, when the detail information of a target-type virtual object is displayed, the virtual camera shoots according to the camera position and shooting view angle in the object information, and the picture shot by the virtual camera is then displayed, so that the target-type virtual object can be highlighted. The game prompt information prompts the task completion progress and how to proceed. The task identifier indicates the task associated with the target-type virtual object.
Based on the above-described embodiments, the embodiments of the present application further provide a schematic diagram of a virtual scene, as shown in fig. 7 to 14.
When no target-type virtual object exists around the first virtual object, as shown in fig. 7, the displayed virtual scene contains a first virtual object 701, a function entry 702 of the object detection function, other controls, a thumbnail map, and so on. The function entry 702 has the same display state as the other controls and is not highlighted; at this point the object detection function has not yet been turned on. Also displayed in the virtual scene is a task prompt 703 for the first virtual object; the task prompt 703 reads "go to business island to seek clues" to prompt how to complete the task.
When a target-type virtual object exists within the target range centered on the first virtual object, as shown in fig. 8, the display state of the function entry 801 of the object detection function in the virtual scene changes, and a second virtual object 802 of the target type is displayed in the virtual scene. The display state of the second virtual object 802 is the same as that of the other virtual objects in the virtual scene, so the user cannot tell by looking at the virtual scene that the second virtual object 802 is of the target type.
The user turns on the object detection function by clicking or long-pressing the function entry of the object detection function via the terminal. As shown in fig. 9, a closing option 901 is displayed in the virtual scene, together with a target display special effect that spreads outward centered on the first virtual object; this target display special effect acts as a detection special effect, presenting a dynamic picture of detecting surrounding target-type virtual objects. The terminal adjusts the shooting view angle of the virtual camera and displays the virtual scene shot by the adjusted virtual camera, with the first virtual object displayed toward the left and the sight displayed in the center, so that the first virtual object does not occlude other virtual objects and the user can easily view the displayed virtual objects. When the virtual camera can currently capture a second virtual object of the target type, the second virtual object 902 is displayed in the first display state in the virtual scene; a fourth virtual object 903, which does not belong to the target type, is also displayed in another display state. When the sight aims at the fourth virtual object 903, the display state of the sight 904 does not change.
The user adjusts the shooting view angle of the virtual camera via the terminal to adjust the displayed virtual scene. When the sight aims at the second virtual object, as shown in fig. 10, the display state of the sight 1001 changes in the virtual scene. When the distance between the first virtual object 1002 and the second virtual object 1003 is outside the distance range corresponding to the second virtual object 1003, so that the information associated with the second virtual object 1003 cannot be viewed, distance prompt information 1004 is displayed around the second virtual object 1003 to prompt that the first virtual object 1002 is too far from or too close to the second virtual object 1003.
When the distance between the first virtual object and the second virtual object is within the distance range corresponding to the second virtual object and the detail information of the second virtual object has not been displayed, the terminal displays, as shown in fig. 11, prompt information 1102 and a detail view option 1103 around the second virtual object 1101, and the prompt information 1102 is "???", indicating that the detail information of the second virtual object 1101 has not yet been displayed.
When the distance between the first virtual object and the second virtual object is within the distance range corresponding to the second virtual object and the detail information of the second virtual object has been displayed, the terminal displays, as shown in fig. 12, profile information 1202 and a detail view option 1203 around the second virtual object 1201, and the profile information 1202 is "television", indicating that the detail information of the second virtual object 1201 has been displayed.
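The two cases in figs. 11 and 12 can be sketched as a lookup in the display record kept by the server; the record structure (a set of object identifiers) and the function name are assumptions for illustration.

```python
def info_beside_aimed_object(display_record, object_id, profile):
    """Return the text shown beside the aimed second virtual object:
    the profile information (e.g. "television") if its detail
    information has already been displayed, otherwise the "???"
    prompt information."""
    if object_id in display_record:  # detail information already displayed
        return profile
    return "???"                     # detail information not yet displayed
```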
The user clicks the detail view option through the terminal, and the terminal switches the display to a detail display interface. The virtual scene in the detail display interface is obtained by shooting the virtual scene with the virtual camera according to the camera position and the shooting angle of view contained in the object information of the second virtual object. As shown in fig. 13, in the virtual scene, the second virtual object 1301 is highlighted, the detail information 1302 of the second virtual object 1301 is displayed, the second virtual object 1301 is displayed in a focused mode, and the controls and other virtual objects in the virtual scene are hidden. When the terminal has displayed the detail information 1302 of the second virtual object 1301, the server updates the display record corresponding to the terminal, recording that the detail information 1302 of the second virtual object 1301 has been displayed.
When the terminal displays the detail information, the user clicks an arbitrary position through the terminal, and the terminal switches the detail display interface back to the virtual scene, in which the sight 1401 aims at the second virtual object 1402 and the profile information 1403 and the detail view option 1404 are displayed around the second virtual object 1402, as shown in fig. 14. A game prompt 1405 is also displayed in the virtual scene, such as "task completed, please go to xx to complete the next task" and so on.
Based on the scheme provided by the embodiment of the application, the information associated with the second virtual object can be viewed without the first virtual object approaching the second virtual object. This matters for special virtual objects in a virtual scene that the first virtual object cannot approach through movement. For example, when a flying virtual object exists in a virtual scene, the user cannot control the first virtual object through the terminal to move near the flying virtual object in the air, but the information related to the flying virtual object can still be viewed by adopting the scheme provided by the embodiment of the application.
Based on the embodiment shown above, the embodiment of the present application further provides a flowchart of a virtual scene display method, as shown in fig. 15, where the method includes:
step 1, a terminal displays a first virtual object and a function entry of an object detection function in a virtual scene, and a user can click or long-press the function entry through the terminal.
Step 2, the terminal detects a pressing operation on the function entry and detects whether the pressing duration reaches the first duration.
In an embodiment of the present application, after step 2, steps 3-13 are performed, or steps 14-26 are performed.
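The branch after step 2 amounts to classifying the press by its duration; a minimal sketch, assuming a hypothetical threshold value (the constant and names are illustrative, not from the embodiment):

```python
FIRST_DURATION = 0.5  # seconds; assumed value of the "first duration"

def classify_press(press_duration):
    """A release before the first duration elapses counts as a click
    (steps 3-13); holding for at least the first duration counts as a
    long press (steps 14-26)."""
    if press_duration < FIRST_DURATION:
        return "click"
    return "long_press"
```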
And 3, in the case where the pressing duration is smaller than the first duration, the terminal detects a release operation on the function entry and determines the operation as a click operation on the function entry. The terminal adjusts the shooting angle of view of the virtual camera and displays the virtual scene captured by the adjusted virtual camera, which is equivalent to opening a special field of view: the terminal displays the virtual scene in the special field of view, with the first virtual object displayed to the left and the sight displayed in the center, and displays the target display special effect in the virtual scene to indicate that the object detection function is turned on.
And 4, the terminal detects whether the sight is aimed at a detected virtual object belonging to the target type.
And 5, under the condition that the sight is aimed at the second virtual object, the terminal detects whether the distance between the first virtual object and the second virtual object meets the distance range corresponding to the second virtual object.
And 6, when the distance between the first virtual object and the second virtual object is outside the distance range corresponding to the second virtual object, the terminal displays distance prompt information, the distance prompt information prompting that the current distance is too short or too long.
And 7, in the case where the distance between the first virtual object and the second virtual object is within the distance range corresponding to the second virtual object, the terminal determines whether the detail information of the second virtual object has been displayed.
And 8, in the case where the detail information of the second virtual object has not been displayed, the terminal displays prompt information and a detail view option in the virtual scene, wherein the prompt information is "???", so as to prompt that the detail information of the second virtual object has not been displayed.
And 9, in the case where the detail information of the second virtual object has been displayed, the terminal displays profile information and a detail view option in the virtual scene, so as to prompt that the detail information of the second virtual object has been displayed.
Step 10, after step 8 or step 9, the terminal detects whether the detail view option is clicked.
And 11, when the terminal detects a click operation on the detail view option, the terminal displays a detail display interface, in which the second virtual object is displayed in an enlarged manner and the detail information is displayed.
And step 12, in response to a click operation on an arbitrary position in the detail display interface, the terminal switches the detail display interface back to the virtual scene, in which the target display special effect, the sight aiming at the second virtual object, and the profile information of the second virtual object are displayed.
And step 13, in response to another click operation on the function entry, the terminal cancels the display of the target display special effect and the display of the sight to indicate that the object detection function is turned off.
And 14, in the case where the pressing duration is not smaller than the first duration, the terminal determines the operation as a long-press operation on the function entry, adjusts the shooting angle of view of the virtual camera, displays the virtual scene captured by the adjusted virtual camera with the first virtual object displayed to the left and the sight displayed in the center, and displays the target display special effect in the virtual scene to indicate that the object detection function is turned on.
Step 15, the terminal detects whether the press on the function entry is maintained.
And step 16, in the case where a release operation on the function entry is detected, the terminal cancels the display of the target display special effect and the sight to indicate that the object detection function is turned off.
And step 17, in the case where it is detected that the press on the function entry is maintained, the terminal detects whether the sight is aimed at a detected virtual object belonging to the target type.
And 18, under the condition that the sight is aimed at the second virtual object, the terminal detects whether the distance between the first virtual object and the second virtual object meets the distance range corresponding to the second virtual object.
And 19, when the distance between the first virtual object and the second virtual object is outside the distance range corresponding to the second virtual object, the terminal displays distance prompt information, the distance prompt information prompting that the current distance is too short or too long.
And step 20, in the case where the distance between the first virtual object and the second virtual object is within the distance range corresponding to the second virtual object, the terminal determines whether the detail information of the second virtual object has been displayed.
And 21, in the case where the detail information of the second virtual object has not been displayed, the terminal displays prompt information and a detail view option in the virtual scene, wherein the prompt information is "???", so as to prompt that the detail information of the second virtual object has not been displayed.
And 22, in the case where the detail information of the second virtual object has been displayed, the terminal displays profile information and a detail view option in the virtual scene, so as to prompt that the detail information of the second virtual object has been displayed.
Step 23, after step 21 or step 22, the terminal detects whether the detail view option is clicked.
And step 24, when the terminal detects a click operation on the detail view option, the terminal displays a detail display interface, in which the second virtual object is displayed in an enlarged manner and the detail information is displayed.
And step 25, in response to a click operation on an arbitrary position in the detail display interface, the terminal switches the detail display interface back to the virtual scene, in which the target display special effect, the sight aiming at the second virtual object, and the profile information of the second virtual object are displayed.
And step 26, in response to a release operation on the function entry, the terminal cancels the display of the target display special effect and the display of the sight to indicate that the object detection function is turned off.
According to the scheme provided by the embodiment of the application, in the virtual scene, information is quickly obtained through aiming and detail viewing, which improves the convenience of viewing information related to a virtual object and thus the interaction efficiency. After aiming, the virtual object and its detail information can be displayed in an enlarged manner, so that the user can view the virtual object immersively without interference from other displayed content, improving the user's sense of immersion. This way of viewing information is applicable to virtual objects in different scenes and avoids the situation where information cannot be viewed because the virtual object to be viewed cannot be approached, so that different interaction requirements of users can be met and the user experience is improved.
On the basis of the embodiment shown above, the virtual scene display method provided by the embodiment of the application further comprises the following steps:
step 1, a terminal displays a virtual scene, wherein a first virtual object, a function entry of an object detection function and a configuration entry of the object detection function are displayed in the virtual scene;
and 2, the terminal responds to the triggering operation of the configuration entry, and displays a type configuration interface, wherein the type configuration interface is used for configuring the type of the virtual object detected by the object detection function.
And 3, the terminal, in response to an input operation in the type configuration interface, determines the input target type, and, in response to a closing operation on the type configuration interface, displays the virtual scene again.
In the embodiment of the application, a user can input any type in the type configuration interface through the terminal, for example, the type of virtual weapon, the type of virtual automobile, the type of virtual armor and the like are input in the type configuration interface.
And 4, the terminal responds to the triggering operation of the function entry in the virtual scene, and in the virtual scene, virtual objects which are around the first virtual object and belong to the configured target type are displayed in a first display state.
In the embodiment of the present application, in the case where an obstacle exists between the virtual object belonging to the configured target type and the first virtual object around the first virtual object, the blocked virtual object can be displayed in the manner provided in the above embodiment.
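The filter in step 4 can be sketched as follows; the object representation, a hypothetical (id, type) pair, is an assumption for illustration and not the embodiment's data structure:

```python
def objects_to_highlight(surrounding_objects, configured_type):
    """Return the ids of virtual objects around the first virtual object
    that belong to the configured target type and should therefore be
    switched to the first display state."""
    return [oid for oid, otype in surrounding_objects
            if otype == configured_type]
```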
And 5, under the condition that the sight is aimed at the virtual object which is detected and belongs to the target type, the terminal can display the profile information and the detail viewing options of the virtual object, and respond to the triggering operation of the detail viewing options to display the detail information and the picking options of the virtual object.
And 6, the terminal responds to the triggering operation of the pick-up options to acquire the virtual object.
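Steps 5-6 amount to transferring the detected virtual object into the first virtual object's inventory without approaching it; a minimal sketch with assumed container types:

```python
def pick_up(scene_objects, inventory, object_id):
    """Move a detected virtual object from the scene into the first
    virtual object's inventory via the pick-up option."""
    if object_id in scene_objects:
        scene_objects.remove(object_id)
        inventory.append(object_id)
        return True  # picked up successfully
    return False     # object no longer present in the scene
```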
In the embodiment of the application, the type of the virtual object detected by the object detection function can be configured so as to detect the virtual object of the configured type under the condition that the object detection function is started, and under the condition that the virtual object of the configured type is detected, the information related to the virtual object can be checked without the first virtual object approaching the detected virtual object, and the virtual object can be acquired, so that the efficiency of searching the virtual object in the virtual scene is improved, the game interestingness is improved, and the user experience is further improved.
For example, according to the scheme provided by the embodiment of the application, a user configures, through the terminal, the prop type detected by the object detection function. In the case where the object detection function is turned on, the terminal detects virtual props around the first virtual object according to the configured prop type; whether or not an obstacle exists between a virtual prop and the first virtual object, the virtual prop can be displayed, so the user does not need to control the first virtual object through the terminal to enter each building in turn to search for virtual props. In the case where a virtual prop around the first virtual object is detected, the user controls the sight through the terminal to aim at the virtual prop, and the information of the virtual prop can be viewed, such as its model, attack value, magazine, and so on. When the user wants to control the first virtual object to pick up the virtual prop, the virtual prop can be picked up through the displayed pick-up option without controlling the first virtual object to approach it, which provides convenience for the user to control the first virtual object to search for virtual props in the virtual scene.
Fig. 16 is a schematic structural diagram of a virtual scene display device according to an embodiment of the present application, as shown in fig. 16, where the device includes:
A display module 1601, configured to display a virtual scene, where a first virtual object is displayed;
The display module 1601 is further configured to display, in the virtual scene, a second virtual object belonging to the target type in a first display state, the first display state indicating that the second virtual object has been detected, the second virtual object being a virtual object of the target type around the first virtual object;
the display module 1601 is further configured to display, in the virtual scene, information associated with the second virtual object when the second virtual object is already targeted.
In one possible implementation, the display module 1601 is configured to switch, in the virtual scene, a display state of displaying the second virtual object to the first display state when the object detection function is turned on.
In another possible implementation manner, the display module 1601 is configured to switch, in the virtual scene, a display state of displaying the second virtual object to the first display state when the object detection function is turned on and the second virtual object is located within a target range, where the target range is a range centered on the position of the first virtual object and with the first distance as a radius.
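The target-range condition in this implementation is a sphere test; a hypothetical sketch (names and coordinate representation are illustrative):

```python
import math

def in_target_range(first_pos, second_pos, first_distance):
    """True if the second virtual object lies within the range centered
    on the first virtual object's position with the first distance as
    the radius, so it may switch to the first display state."""
    return math.dist(first_pos, second_pos) <= first_distance
```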
In another possible implementation, the display module 1601 is configured to display, in the virtual scene, information associated with the second virtual object in a case where the sight in the virtual scene aims at the second virtual object.
In another possible implementation, the display module 1601 is configured to, in a case where the sight is aimed at the second virtual object, switch, in the virtual scene, a display state of the display sight to a second display state, and display information associated with the second virtual object, where the second display state indicates that the sight has been aimed at the detected virtual object.
In another possible implementation manner, the display module 1601 is configured to display, in the virtual scene, prompt information associated with the second virtual object when the second virtual object is targeted and the detailed information of the second virtual object is not displayed, the prompt information being used to prompt that the detailed information of the second virtual object is not displayed, or display, in the virtual scene, profile information associated with the second virtual object when the second virtual object is targeted and the detailed information of the second virtual object is displayed.
In another possible implementation manner, the display module 1601 is further configured to highlight, in response to a detail detection operation on the prompt information, the second virtual object in the virtual scene, display the detail information of the second virtual object, and display, in response to a closing operation on the detail information, profile information associated with the second virtual object in the virtual scene.
In another possible implementation, the display module 1601 is further configured to display, in response to an on operation of the object detection function, a target display special effect in the virtual scene, the target display special effect indicating that the object detection function has been turned on.
In another possible implementation, the display module 1601 is configured to, in response to an on operation, display a target display effect in the virtual scene to be diffused around the first virtual object.
In another possible implementation manner, the display module 1601 is further configured to, in a case where a virtual object of a target type exists around the first virtual object, switch, in the virtual scene, a display state of a display function entry to a third display state, where the function entry is an entry of the object detection function, and the third display state indicates that the virtual object of the target type exists around the first virtual object;
A display module 1601 for, in response to a trigger operation on the function entry in a case where the function entry is displayed in the third display state, displaying a target display special effect in the virtual scene.
In another possible implementation, the display module 1601 is configured to display, in the virtual scene, information associated with the second virtual object and matching behavior information of the first virtual object, where the second virtual object has been targeted.
In another possible implementation manner, the display module 1601 is configured to switch, in the virtual scene, the display state of the display function entry to the third display state when the virtual object of the target type exists in a target range, where the target range is a range centered on the position of the first virtual object and with the first distance as a radius.
In another possible implementation manner, the display module 1601 is configured to determine, if a third virtual object exists within the target range, a distance between the first virtual object and the third virtual object, where the third virtual object belongs to the target type, and switch, in the virtual scene, a display state of the display function entry to a third display state if the distance is smaller than a second distance, where the second distance is a distance at which the third virtual object can be detected.
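The two-stage check in this implementation, namely that a target-type third virtual object lies inside the target range and its distance to the first virtual object is smaller than the second distance, can be sketched as follows (names and state strings are hypothetical):

```python
import math

def function_entry_display_state(first_pos, third_pos,
                                 first_distance, second_distance):
    """Return "third_display_state" when a target-type third virtual
    object is inside the target range and close enough to be detected,
    otherwise the entry's default state."""
    d = math.dist(first_pos, third_pos)
    if d <= first_distance and d < second_distance:
        return "third_display_state"
    return "default_state"
```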
In another possible implementation, the display module 1601 is configured to display a target display special effect in the virtual scene in response to the on operation in a case where the first virtual object is located in the target area.
In another possible implementation, the display module 1601 is configured to display, in the virtual scene, information associated with the second virtual object in a case where the second virtual object has been aimed at and the distance between the first virtual object and the second virtual object is within the distance range corresponding to the second virtual object.
In another possible implementation manner, the display module 1601 is further configured to, in a case where the second virtual object has been aimed, where the distance between the first virtual object and the second virtual object is out of the distance range, display distance prompt information in the virtual scene, where the distance prompt information is used to prompt that the distance between the first virtual object and the second virtual object is out of the distance range.
In another possible implementation, the display module 1601 is further configured to cancel the display of the second virtual object in the virtual scene in response to a closing operation of the object detection function, or switch a display state of the second virtual object to a fourth display state in the virtual scene in response to a closing operation of the object detection function.
In another possible implementation manner, the display module 1601 is further configured to display, in the virtual scene, game prompt information, the game prompt information being used to prompt that the task of detecting the second virtual object has been completed.
In another possible implementation manner, in a case where the object detection function is turned on and an obstacle exists between the second virtual object and the first virtual object, the display module 1601 is configured to perform one of the following in the virtual scene: switch the display state of the area of the obstacle that blocks the second virtual object to a transparent state, so as to display the second virtual object in the first display state; increase the transparency of the obstacle, so as to display the second virtual object in the first display state; or display, in the area of the obstacle that blocks the second virtual object, the contour of the second virtual object in the first display state.
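The three occlusion-handling alternatives can be sketched as one dispatch; the mode names, the transparency field, and the 0.5 increment are assumptions for illustration:

```python
def render_occluded_object(mode, obstacle):
    """Handle an obstacle between the first and second virtual objects.
    obstacle: dict with a 'transparency' value in [0, 1], where 1 means
    fully transparent."""
    if mode == "transparent_region":
        # switch the blocking area of the obstacle to a transparent state
        obstacle["transparency"] = 1.0
        return "second_object_in_first_display_state"
    if mode == "raise_transparency":
        # increase the obstacle's transparency so the object shows through
        obstacle["transparency"] = min(1.0, obstacle["transparency"] + 0.5)
        return "second_object_in_first_display_state"
    if mode == "outline":
        # draw only the second virtual object's contour over the blocking area
        return "second_object_contour"
    return "occluded"
```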
It should be noted that, in the virtual scene display device provided in the above embodiment, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the computer device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the virtual scene display device and the virtual scene display method embodiment provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment, and are not repeated herein.
The embodiment of the application also provides a computer device, which comprises a processor and a memory, wherein at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor to realize the operations executed by the virtual scene display method of the embodiment.
Optionally, the computer device is provided as a terminal. Fig. 17 shows a block diagram of a terminal 1700 provided by an exemplary embodiment of the present application. Terminal 1700 includes a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor; the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in a wake-up state, and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit) for rendering and drawing of content required to be displayed by the display screen. In some embodiments, the processor 1701 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1702 may include one or more computer-readable storage media, which may be non-transitory. Memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1702 is used to store at least one computer program for execution by processor 1701 to implement the virtual scene display method provided by the method embodiments of the present application.
In some embodiments, terminal 1700 may also optionally include a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702, and peripheral interface 1703 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 1703 by buses, signal lines or a circuit board. Specifically, the peripheral devices include at least one of radio frequency circuitry 1704, a display screen 1705, a camera assembly 1706, audio circuitry 1707, and a power source 1708.
The peripheral interface 1703 may be used to connect at least one Input/Output (I/O) related peripheral to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, the memory 1702, and the peripheral interface 1703 are integrated on the same chip or circuit board, and in some other embodiments, either or both of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1704 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1704 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 1704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1704 includes an antenna system, an RF transceiver, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuit 1704 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to, the world wide web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1704 may also include NFC (Near Field Communication) related circuits, which are not limited by the present application.
The display screen 1705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1705 is a touch display, the display 1705 also has the ability to collect touch signals at or above the surface of the display 1705. The touch signal may be input as a control signal to the processor 1701 for processing. At this point, the display 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1705, disposed on the front panel of the terminal 1700; in other embodiments, there may be at least two displays 1705, disposed on different surfaces of the terminal 1700 or in a folded design; in still other embodiments, the display 1705 may be a flexible display, disposed on a curved surface or a folded surface of the terminal 1700. Even more, the display 1705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The display 1705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 1706 is used to capture images or video. Optionally, the camera assembly 1706 includes a front camera and a rear camera. The front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so as to realize fusion of the main camera and the depth camera for a background blurring function, fusion of the main camera and the wide-angle camera for panoramic shooting and virtual reality (VR) shooting functions, or other fusion shooting functions. In some embodiments, the camera assembly 1706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1701 for processing, or inputting the electric signals to the radio frequency circuit 1704 for voice communication. For purposes of stereo acquisition or noise reduction, the microphone may be multiple and separately disposed at different locations of the terminal 1700. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1707 may also include a headphone jack.
The power supply 1708 is used to power the various components in the terminal 1700. The power supply 1708 may be an alternating current source, a direct current source, a disposable battery, or a rechargeable battery. When the power supply 1708 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery, which is charged through a wired line, or a wireless rechargeable battery, which is charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
Those skilled in the art will appreciate that the structure shown in fig. 17 is not limiting, and that the terminal 1700 may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
Optionally, the computer device is provided as a server. Fig. 18 is a schematic diagram of a server according to an embodiment of the present application. The server 1800 may vary considerably in configuration and performance, and may include one or more processors (central processing units, CPUs) 1801 and one or more memories 1802, where the memories 1802 store at least one computer program that is loaded and executed by the processors 1801 to implement the methods provided by the above method embodiments. Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
An embodiment of the present application also provides a computer-readable storage medium in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to implement the operations performed by the virtual scene display method of the above embodiments.
An embodiment of the present application also provides a computer program product comprising a computer program that, when executed by a processor, implements the operations performed by the virtual scene display method of the above embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The foregoing description is merely illustrative of the embodiments of the present application; various modifications, equivalent substitutions, improvements, and the like may be made without departing from the spirit and principles of the embodiments of the present application.
Claims (20)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410643681.7A CN121041679A (en) | 2024-05-22 | 2024-05-22 | Virtual scene display method, device, computer equipment and storage medium |
| PCT/CN2025/087795 WO2025241757A1 (en) | 2024-05-22 | 2025-04-08 | Virtual scene display method and apparatus, computer device and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410643681.7A CN121041679A (en) | 2024-05-22 | 2024-05-22 | Virtual scene display method, device, computer equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN121041679A true CN121041679A (en) | 2025-12-02 |
Family
ID=97794611
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410643681.7A Pending CN121041679A (en) | 2024-05-22 | 2024-05-22 | Virtual scene display method, device, computer equipment and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN121041679A (en) |
| WO (1) | WO2025241757A1 (en) |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113209617A (en) * | 2021-06-10 | 2021-08-06 | 腾讯科技(深圳)有限公司 | Virtual object marking method and device |
| CN113457150B (en) * | 2021-07-16 | 2023-10-24 | 腾讯科技(深圳)有限公司 | Information prompting method and device, storage medium and electronic equipment |
| CN116196618A (en) * | 2021-11-30 | 2023-06-02 | 完美世界(北京)软件科技发展有限公司 | Game view control method and device, storage medium and electronic equipment |
| CN115193035A (en) * | 2022-07-06 | 2022-10-18 | 网易(杭州)网络有限公司 | Game display control method and device, computer equipment and storage medium |
| CN117815662A (en) * | 2022-09-27 | 2024-04-05 | 腾讯科技(深圳)有限公司 | Virtual prop control method, device, equipment, storage medium and program product |
- 2024
- 2024-05-22 CN CN202410643681.7A patent/CN121041679A/en active Pending
- 2025
- 2025-04-08 WO PCT/CN2025/087795 patent/WO2025241757A1/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| WO2025241757A1 (en) | 2025-11-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN108619721B (en) | Distance information display method and device in virtual scene and computer equipment | |
| JP7476109B2 (en) | Method, device, terminal and computer program for controlling interaction between virtual objects and virtual scenes | |
| CN111672125B (en) | Virtual object interaction method and related device | |
| JP7191210B2 (en) | Virtual environment observation method, device and storage medium | |
| CN112915538B (en) | Game information display method, device, terminal and storage medium | |
| CN109788174B (en) | A kind of supplementary light method and terminal | |
| JP7601451B2 (en) | Method, device, and computer program for controlling virtual objects | |
| CN108536295B (en) | Object control method and device in virtual scene and computer equipment | |
| CN112843703B (en) | Information display method, device, terminal and storage medium | |
| WO2022237076A1 (en) | Method and apparatus for controlling avatar, and device and computer-readable storage medium | |
| CN117771649A (en) | Methods, devices, equipment and storage media for controlling virtual characters | |
| CN111760281B (en) | Cutscene playing method and device, computer equipment and storage medium | |
| US20230070612A1 (en) | Operation prompting method and apparatus, terminal, and storage medium | |
| CN117899473B (en) | Image frame display method, device, computer equipment and storage medium | |
| CN121041679A (en) | Virtual scene display method, device, computer equipment and storage medium | |
| CN112057861B (en) | Virtual object control method and device, computer equipment and storage medium | |
| CN119075309A (en) | Virtual item throwing method, device, computer equipment and storage medium | |
| CN118203841A (en) | Virtual object control method, device, terminal and storage medium | |
| CN119015693A (en) | Operation control method, device, equipment and computer readable storage medium | |
| CN116943208A (en) | Virtual object control method, device, computer equipment and storage medium | |
| CN115721935A (en) | Display method of map picture, database generation method, device and equipment | |
| CN114470763A (en) | Method, device, device and storage medium for displaying interactive screen | |
| CN118718391A (en) | Virtual scene display method, device, terminal and storage medium | |
| CN118059477A (en) | Picture display method, device, computer equipment and storage medium | |
| US20260027465A1 (en) | Virtual Object Selection Methods and Systems |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |