
CN108310768B - Virtual scene display method and device, storage medium and electronic device - Google Patents

Info

Publication number
CN108310768B
Authority
CN
China
Prior art keywords
area
virtual scene
distance
event
client
Prior art date
Legal status
Active
Application number
CN201810040140.XA
Other languages
Chinese (zh)
Other versions
CN108310768A (en)
Inventor
童颜
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201810040140.XA
Publication of CN108310768A
Application granted
Publication of CN108310768B
Status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525: Changing parameters of virtual cameras
    • A63F13/5255: Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/426: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method and device for displaying a virtual scene, a storage medium, and an electronic device. The method comprises the following steps: detecting a first event, where the first event is an event triggered on a client, and the client is used for controlling a target object in a virtual scene displayed on the client; in response to the first event, determining a first area in the virtual scene, where the first area is different from a second area, and the second area is the area where the target object is located in the virtual scene; and displaying the scenes located in the first area and the second area on the client. The invention solves the technical problem in the related art that the scene area displayed by a terminal is small.

Description

Virtual scene display method and device, storage medium and electronic device
Technical Field
The invention relates to the field of the internet, and in particular to a method and device for displaying a virtual scene, a storage medium, and an electronic device.
Background
With the development of multimedia technology and the popularization of wireless networks, entertainment activities have become increasingly rich, such as playing games on handheld media devices or playing stand-alone and networked games on computers, across many game types, for example barrage shooting games, adventure games, simulation games, role-playing games, multiplayer online battle arena (MOBA) games, and casual chess and card games.
In most game types, a player may choose to play with other players; for example, in the MOBA games currently popular on the market, multiple players can participate in a game online at the same time. During play, a player can view the scene of the area where the player character is located. Because the screen of a mobile terminal is small, the scene area displayed on the mobile terminal is also small, which causes several problems; for example, the player cannot spot approaching enemies in time, which degrades the player's game experience.
For the technical problem in the related art that the scene area displayed by a terminal is small, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the present invention provide a method and device for displaying a virtual scene, a storage medium, and an electronic device, so as to at least solve the technical problem in the related art that the scene area displayed by a terminal is small.
According to an aspect of the embodiments of the present invention, there is provided a method for displaying a virtual scene, including: detecting a first event, where the first event is an event triggered on a client, and the client is used for controlling a target object in a virtual scene displayed on the client; in response to the first event, determining a first area in the virtual scene, where the first area is different from a second area, and the second area is the area where the target object is located in the virtual scene; and displaying the scenes located in the first area and the second area on the client.
According to another aspect of the embodiments of the present invention, there is also provided a display apparatus for a virtual scene, including a detection unit, a determining unit, and a display unit. The detection unit is used for detecting a first event, where the first event is an event triggered on a client, and the client is used for controlling a target object in a virtual scene displayed on the client; the determining unit is used for determining, in response to the first event, a first area in the virtual scene, where the first area is different from a second area, and the second area is the area where the target object is located in the virtual scene; and the display unit is used for displaying the scenes in the first area and the second area on the client.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program which, when executed, performs the above-described method.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above method through the computer program.
In the embodiments of the present invention, when a first event is detected, a first area in the virtual scene is determined, where the first area is different from a second area, and the second area is the area where the target object is located in the virtual scene; the scenes in the first area and the second area are then displayed on the client. Because the first event can expand the range of the scene area that the display interface of the client can show, the technical problem in the related art that the scene area displayed by a terminal is small is solved, achieving the technical effect of expanding the range of the scene area the terminal can display.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic diagram of a hardware environment of a display method of a virtual scene according to an embodiment of the present invention;
FIG. 2 is a flow chart of an alternative method of displaying a virtual scene in accordance with an embodiment of the present invention;
FIG. 3 is a schematic illustration of an alternative game map according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of an alternative game map according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of an alternative game map according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of an alternative game map according to an embodiment of the present invention;
FIG. 7 is a schematic view of an alternative game interface according to an embodiment of the present invention;
FIG. 8 is a schematic view of an alternative game interface according to an embodiment of the present invention;
FIG. 9 is a schematic view of an alternative game interface according to an embodiment of the present invention;
FIG. 10 is a flow chart of an alternative method of displaying a virtual scene in accordance with an embodiment of the present invention;
FIG. 11 is a schematic diagram of an alternative display device for a virtual scene in accordance with an embodiment of the invention; and
fig. 12 is a block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms appearing in the description of the embodiments of the present invention are explained as follows:
MOBA: an abbreviation of Multiplayer Online Battle Arena, a genre of multiplayer online tactical competitive games.
UI layer: UI is short for User Interface; the UI layer contains the icons and controls of the interface.
Operation layer: the game scene layer in which the character used by the player in the game, such as a hero, moves.
According to an aspect of an embodiment of the present invention, an embodiment of a method for displaying a virtual scene is provided.
Alternatively, in the present embodiment, the display method of the virtual scene may be applied to a hardware environment formed by the server 101 and the terminal 103 as shown in fig. 1. As shown in fig. 1, the server 101 is connected to the terminal 103 through a network and may be used to provide services (such as game services, application services, WEB services, etc.) for the terminal or a client installed on the terminal. A database 105 may be provided on the server or independently of the server to provide data storage services for the server 101. The network includes, but is not limited to, a wide area network, a metropolitan area network, or a local area network, and the terminal 103 is not limited to a PC, a mobile phone, a tablet computer, etc. The virtual scene display method according to the embodiment of the present invention may be executed by the server 101, by the terminal 103, or by both the server 101 and the terminal 103 together. The terminal 103 may execute the method for displaying a virtual scene according to the embodiment of the present invention through a client installed on it:
Step S102: a drag event (i.e., a first event) triggered by the player, such as a two-finger slide, is detected on the client, where the slide direction may be any of up, down, left, or right.
Step S104: the third area B (i.e., the scene area currently displayed on the first sub-interface of the client) is moved according to the drag distance of the drag event to obtain a first area C, in which the opposing character D may appear after the move.
Step S106: the scenes in the first area C and the second area A are displayed on the client; the second area A can remain displayed unchanged, while the displayed scene is switched from that of the third area to that of the first area.
In other words, through the above steps, one part of the scene on the client can be kept still while the other part is moved, so that the player can observe various changes in the scene, such as changes of teammates, opponents, non-player-controlled objects (NPCs), and the game environment. Fig. 1 is only used to schematically illustrate the technical solution of the present application; an embodiment of the present application is described in detail below with reference to fig. 2.
Fig. 2 is a flowchart of a display method of an alternative virtual scene according to an embodiment of the present invention, and as shown in fig. 2, the method may include the following steps:
step S202, a first event is detected, where the first event is an event triggered on a client, and the client is used to control a target object in a virtual scene displayed on the client.
The client can be a client of a game application, a social application, an instant messaging application, and the like. The client can be installed on a terminal such as a mobile terminal, a PC, a notebook computer, or a smart TV, and the first event can be triggered directly on the terminal, for example via the terminal's touch screen, pressure-sensitive screen, control handle, or image acquisition device, or triggered by another terminal in communication connection with the terminal, such as a mobile phone connected to a PC or a smart TV.
The virtual scene can be a game scene in a game application, a social scene in a social application, and the like; the target object can be a player character controlled by a player in a game scene, or a user character controlled by a user in a social application.
Step S204, in response to the first event, determining a first area in the virtual scene, where the first area is different from a second area, and the second area is an area where the target object is located in the virtual scene.
Optionally, the first event may be any of multiple types of events (e.g., a drag event, a slide event, a click event, a press event, a gesture, etc.). When the first area in the virtual scene is determined, event information of the first event, such as a slide distance, a number of clicks, or a pressure value, may be obtained according to the event type, and the first area in the virtual scene is then determined from the event information.
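As a minimal sketch of how such per-type event information might be modeled on the client (the type and function names below are illustrative assumptions, not part of this disclosure):

```typescript
// Hypothetical model of the first event and its per-type information.
type FirstEvent =
  | { kind: "slide"; direction: "up" | "down" | "left" | "right"; distance: number }
  | { kind: "click"; x: number; y: number; count: number }
  | { kind: "press"; x: number; y: number; pressure: number };

// Extract the quantity that later determines the first distance.
function eventMagnitude(e: FirstEvent): number {
  switch (e.kind) {
    case "slide": return e.distance;  // third distance x
    case "click": return e.count;     // number of clicks n
    case "press": return e.pressure;  // pressure value n
  }
}
```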
In step S206, scenes located in the first area and the second area are displayed on the client.
It should be noted that, for the client, the display interface may be divided into at least two parts: one part (the first sub-interface) is used to display the scene in the first area, and the other part (the second sub-interface) is used to display the scene in the second area. The target object may sit at the center of the entire display interface (e.g., straddling the first area and the second area), or may exist only in the second area. Since the size of the screen is fixed, the range that can be displayed is also fixed, and the position of the target object in the screen is fixed (such as in the middle of the screen). This is equivalent to defining that the second area can only be an area within a fixed distance from the target object, so the scene area the terminal can display is small.
When the target object does not move, the picture in the second sub-interface does not change. The player can then trigger a change of the picture in the first sub-interface through the various types of first events described above, for example by sliding toward any of the four directions of the screen, thereby expanding the scene area the terminal can display and the visual range of the user, which can effectively improve the user experience.
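The split described above might be organized as follows; this is a sketch under assumed names (SceneView, Region, etc.), not a structure mandated by this disclosure:

```typescript
// A display interface split into two sub-interfaces: the second stays
// anchored on the target object; the first can be moved independently.
interface Vec2 { x: number; y: number }
interface Region { center: Vec2; width: number; height: number }

class SceneView {
  constructor(
    public secondArea: Region, // follows the target object
    public firstArea: Region   // initially the third area
  ) {}

  // Keep the second area centered on the target object each frame.
  followTarget(target: Vec2): void {
    this.secondArea.center = { ...target };
  }

  // A first event retargets only the first sub-interface's region.
  retargetFirstArea(newCenter: Vec2): void {
    this.firstArea.center = { ...newCenter };
  }
}
```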
Through the above steps S202 to S206, when the first event is detected, a first area in the virtual scene is determined, where the first area is different from a second area, and the second area is the area where the target object is located in the virtual scene; the scenes in the first area and the second area are then displayed on the client. Because the first event can expand the range of the scene area that the display interface of the client can show, the technical problem in the related art that the scene area displayed by a terminal is small is solved, achieving the technical effect of expanding the range of the scene area the terminal can display.
In the technical solution provided in step S202, a first event is detected, where the first event is an event triggered on a client, and the client is configured to control a target object in a virtual scene displayed on the client.
Optionally, the type of the detected first event includes, but is not limited to:
1) a drag event or slide event detected on a device such as a touch screen, pressure-sensitive screen, mouse, or image acquisition device;
2) a click event detected on a device such as a touch screen, pressure-sensitive screen, mouse, or image acquisition device;
3) a press event detected on a device such as a pressure-sensitive screen;
4) a gesture operation (such as drawing a circle or a triangle) detected on a device such as an image acquisition device.
In the technical solution provided in step S204, when a first area in a virtual scene is determined, event information of a first event may be acquired; a first region in the virtual scene is determined from the event information.
Optionally, the acquired event information of the first event includes but is not limited to:
1) sliding event
The sliding event of the present application may be a single-finger sliding or a multi-finger sliding (e.g., a two-finger sliding, a three-finger sliding, etc.).
When the first event is a sliding event, event information of the sliding event is acquired on the client, where the event information includes the sliding direction of the sliding event and a third distance, and the third distance is used for determining the first distance (i.e., the distance by which the displayed scene area is moved in the plane).
Alternatively, as shown in fig. 3, when the first area in the virtual scene is determined according to the event information, the position of the third area B in the virtual scene may be moved at least according to the direction indicated by the event information to obtain the first area; for example, on each slide, the third area B is translated a fixed distance in the sliding direction, and moving the position of the third area in the virtual scene yields the first area, where the third area is the area containing the scene displayed by the first sub-interface before the move.
Optionally, moving the position of the third area in the virtual scene at least according to the direction indicated by the event information to obtain the first area may include: acquiring a first distance y indicated by the event information, where the first distance is the distance the third area needs to move at a first moment, y = k * x, x is the third distance, and k is a preset coefficient; and moving the position of the third area in the virtual scene by the first distance in the direction indicated by the event information at the first moment to obtain the first area, where the third area is the area containing the scene displayed by the first sub-interface at a second moment, and the second moment is the moment before the first moment. As shown in fig. 3, when the third area moves by the distance y in the sliding direction, this is equivalent to moving the area C downward by the distance y so that it appears in the rendering range of the first sub-interface; as shown in fig. 4, B then moves out of the rendering range of the first sub-interface, and the scene picture rendered in the second sub-interface is unchanged.
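The computation y = k * x and the resulting pan might look like the following sketch (the function and parameter names are assumptions for illustration):

```typescript
// Move the third area's center by the first distance y = k * x along
// the slide direction, yielding the first area's center.
function moveThirdArea(
  center: { x: number; y: number },  // center of the third area
  dir: { x: number; y: number },     // unit vector of the slide direction
  thirdDistance: number,             // x: finger slide distance this period
  k: number                          // preset sensitivity coefficient
): { x: number; y: number } {
  const firstDistance = k * thirdDistance; // y = k * x
  return {
    x: center.x + dir.x * firstDistance,
    y: center.y + dir.y * firstDistance,
  };
}
```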
Optionally, as shown in fig. 5, the player may also slide left or right. After the third area moves by the distance y in the sliding direction, this is equivalent to moving the area E (i.e., the aforementioned first area) to the left by the distance y so that it appears in the rendering range of the first sub-interface; as shown in fig. 6, B then moves out of the rendering range of the first sub-interface, and the scene picture rendered in the second sub-interface is unchanged.
Optionally, the third distance may be the sliding distance within a fixed time period (e.g., 0.1, 0.2, or 0.5 seconds) during the sliding event; that is, the third distance is determined once every fixed time period, and the corresponding first distance is then moved. The third distance may also be the sliding distance of one complete sliding event.
It should be noted that a maximum region allowed to be viewed may also be preset according to the position of the target object. In that case, moving the position of the third area in the virtual scene at least according to the direction indicated by the event information to obtain the first area may include: acquiring a first distance indicated by the event information, where the first distance is the distance the third area needs to move at the first moment; when the first distance is smaller than a second distance, moving the position of the third area in the virtual scene by the first distance in the direction indicated by the event information to obtain the first area, where the second distance is the distance between the third area and the boundary of a fourth area in the direction indicated by the event information, the fourth area is the area allowed to be viewed in the virtual scene, and the third area is located within the fourth area; and when the first distance is not smaller than the second distance, moving the position of the third area in the virtual scene by the second distance in the direction indicated by the event information to obtain the first area.
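A sketch of this boundary clamp, with illustrative names (the two branches mirror the two conditions above):

```typescript
// The third area moves the full first distance while it stays inside the
// fourth area; otherwise it stops exactly on the fourth area's boundary.
function clampedMoveDistance(
  firstDistance: number,  // distance the third area should move
  secondDistance: number  // gap to the fourth area's edge in that direction
): number {
  if (firstDistance < secondDistance) {
    return firstDistance; // full move: still inside the allowed region
  }
  return secondDistance;  // clamp: land on the fourth area's boundary
}
```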
2) Touch event. When the first event is a touch event, event information of the touch event is acquired on the client, where the event information includes the touch position of the touch event.
The touch event may be a click event on the third sub-interface, and the corresponding event information includes a position of the click.
Determining the first area in the virtual scene according to the event information may include: acquiring a first position represented by the event information, and determining the area where the first position is located in the virtual scene as the first area.
Optionally, acquiring the first position represented by the event information includes: acquiring the first position corresponding to the touch position represented by the event information, where the touch position is the trigger position of the event in a third sub-interface, the third sub-interface is used for displaying a scene map of the virtual scene, and the first position is the position to which the touch position maps in the scene map. Determining the area where the first position is located in the virtual scene as the first area includes: determining the area where the first position is located in the scene map as a fifth area, and taking the part of the virtual scene represented by the fifth area as the first area. The fifth area here is the same size as the third area.
As shown in fig. 7, the player clicks, in the third sub-interface, the area that needs to be displayed in the first sub-interface; the content displayed at the current time (corresponding to the user click position 701) is, for example, the content of the third area B in the map. As shown in fig. 8, after the player's finger position changes to 703, the area displayed in the first sub-interface is repositioned, specifically to the area centered on the finger touch position (i.e., the fifth area).
Similarly, the player may also click in the first sub-interface, with the clicked location becoming the center of the area (the first area) displayed in the first sub-interface.
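The map-to-scene lookup might be implemented as in the following sketch; the linear scaling and all names are assumptions:

```typescript
// Map a click in the third sub-interface (the scene map) to the first
// position in the virtual scene, then center a fifth area on it.
function minimapClickToFirstArea(
  click: { x: number; y: number },             // touch position in the map
  map: { width: number; height: number },      // scene map size (pixels)
  world: { width: number; height: number },    // virtual scene size
  areaSize: { width: number; height: number }  // size of the third area
) {
  // First position: the touch position mapped into the virtual scene.
  const firstPosition = {
    x: (click.x / map.width) * world.width,
    y: (click.y / map.height) * world.height,
  };
  // Fifth area: same size as the third area, centered on the first position.
  return { center: firstPosition, ...areaSize };
}
```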
The touch event may also be a click event or a press event on the first sub-interface, with the corresponding event information including a touch position and a number of clicks or a degree of pressing (pressure value).
Optionally, moving the position of the third area in the virtual scene at least according to the direction indicated by the event information to obtain the first area may include: acquiring a first distance y indicated by the event information, where the first distance is the distance the third area needs to move at a first moment, y = k * n, n is the number of clicks or the pressing force, and k is a preset coefficient; and moving the position of the third area in the virtual scene by the first distance in the direction indicated by the event information at the first moment to obtain the first area, where the third area is the area containing the scene displayed by the first sub-interface at a second moment, and the second moment is the moment before the first moment.
The direction may be determined according to the touch position. For example, if the first sub-interface is divided into a plurality of regions, each representing a moving direction (e.g., up, down, left, or right), the moving direction is determined by the region in which the touch position falls.
As shown in fig. 9, the first sub-interface is divided into four areas 901, 903, 905, and 907, corresponding to the four moving directions of down, right, up, and left, respectively; when the player clicks the area 905, the area D in the scene moves upward to become the first area.
Alternatively, the division of the areas and the direction each area indicates may be determined in other manners; the above embodiment is only an illustration.
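One possible region-to-direction mapping is sketched below, assuming the four regions are the triangles formed by the sub-interface's diagonals (a layout choice not specified by this disclosure):

```typescript
type Direction = "up" | "down" | "left" | "right";

// Derive the moving direction from where the first sub-interface is touched.
function directionFromTouch(
  x: number, y: number, // touch position within the sub-interface
  w: number, h: number  // sub-interface width and height
): Direction {
  // Normalize to [-1, 1] with the origin at the sub-interface center.
  const nx = (2 * x) / w - 1;
  const ny = (2 * y) / h - 1;
  // The dominant axis picks the region; screen y grows downward.
  if (Math.abs(nx) > Math.abs(ny)) return nx > 0 ? "right" : "left";
  return ny > 0 ? "down" : "up";
}
```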
Similar to the above embodiment, the number of clicks or the pressure value may be counted within a fixed time period (e.g., 0.1, 0.2, or 0.5 seconds) during the touch event; that is, the number of clicks or the pressure value is determined once every fixed time period, and the corresponding first distance is then moved. The number of clicks or the pressure value may also be that of one complete touch event; for example, as long as the interval between every two clicks is less than 0.1 second, the clicks may be regarded as belonging to the same touch event.
Optionally, the client is provided with a configuration interface, and a configuration screen corresponding to the configuration interface may be displayed on the client; the configuration screen is used to configure the sensitivity parameter k, and the first distance is the product of the third distance (or the number of clicks, or the pressure value) and the sensitivity parameter.
In the technical solution provided in step S206, scenes located in the first area and the second area are displayed on the client.
Optionally, displaying the scenes located in the first area and the second area on the client includes: displaying the scene in the first area on a first sub-interface of the client's display interface, and displaying the scene in the second area on a second sub-interface of the display interface, where, before the first event is detected, the scene of the area where the target object is located in the virtual scene is displayed across the first sub-interface and the second sub-interface.
As shown in fig. 3, before the first event (i.e., the player's two-finger downward slide) is detected for the first time, the content displayed on the terminal display interface is the scene content in areas A and B (the second area and the third area), which in fig. 3 can be centered on the game character (i.e., the second and third areas are the areas where the target object is located in the virtual scene). When the first event is detected for the first time, the area displayed on the first sub-interface is switched from the third area to the first area C, while the content displayed by the second sub-interface is still the scene in the second area; alternatively, the scenes of the second and third areas are scaled down and displayed together in the second sub-interface.
It should be noted that if the first event is detected not for the first time but for the Nth time (N ≥ 2), the third area B may not be the area where the target object is located in the virtual scene, but rather the area reached after the move triggered by the previous first event. When the first event is detected, the area displayed by the first sub-interface is again switched from the third area to the first area C, and the content displayed by the second sub-interface is still the scene in the second area, or the scenes of the second and third areas are scaled down and displayed together in the second sub-interface.
As an alternative embodiment, the application of the technical solution of the present application to a game on a mobile terminal is described below as an example.
The problem that battlefield conditions cannot be checked quickly and in time during combat has long troubled the development of mobile phone games (such as MOBA games). Based on the technical solution of the present application, the displayable range of the screen can be expanded through a two-finger lens-moving mode on the right side of the mobile phone screen: a player can quickly check the surrounding battlefield at any time while moving, solving the insensitivity and awkward operation of moving the lens via the minimap.
In a given area of the screen (such as the lower-right area), the player can slide with two fingers to move the lens a certain distance and quickly switch to a target area to check the battlefield state; the lens sensitivity can also be adjusted in the settings to suit the operating habits of different players.
The lens-moving area can be arranged on the right side of the screen (the black frame range in the figures) and can remain invisible at all times (i.e., the black frame is not displayed). Lens-moving mode: the player slides with two fingers in the lens-moving area, and the lens moves along the sliding direction of the fingers; a two-finger slide in the lens-moving area triggers the lens-moving function, and the maximum range is reached once the fingers move out of the red frame. While sliding, the other button interactions of the movement region are unaffected. As long as the player does not lift the fingers, the lens stays where it has been moved; once the fingers are lifted, the lens automatically returns to the hero. The relationship between the lens movement and the finger movement is: lens movement distance = finger sliding distance * lens sensitivity coefficient k, where the coefficient k has a range that the player can configure in the settings.
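A gesture handler implementing this behavior might look like the sketch below; the event plumbing, class, and callback names are assumptions, not an API defined by this disclosure:

```typescript
// Two-finger lens move: while both fingers are down, pan the lens by
// (midpoint slide) * k; when the fingers lift, snap back to the hero.
class LensMoveGesture {
  private lastMid: { x: number; y: number } | null = null;

  constructor(
    private k: number,                                 // sensitivity coefficient
    private panLens: (dx: number, dy: number) => void, // move the lens
    private snapToHero: () => void                     // recenter on the hero
  ) {}

  onTouchMove(touches: { x: number; y: number }[]): void {
    if (touches.length !== 2) return; // only a two-finger slide triggers it
    const mid = {
      x: (touches[0].x + touches[1].x) / 2,
      y: (touches[0].y + touches[1].y) / 2,
    };
    if (this.lastMid !== null) {
      // Lens movement distance = finger sliding distance * coefficient k.
      this.panLens((mid.x - this.lastMid.x) * this.k,
                   (mid.y - this.lastMid.y) * this.k);
    }
    this.lastMid = mid;
  }

  onTouchEnd(): void {
    this.lastMid = null;
    this.snapToHero(); // the lens returns to the hero once fingers lift
  }
}
```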
As shown in fig. 10:
Step S1002: determine whether the player has enabled the two-finger lens-moving function in the settings; if not, execute step S1004; if yes, execute step S1006.
Step S1004: sliding in the lens-moving area does not trigger a lens move.
Step S1006: when the player sets the lens sensitivity coefficient k, the client records the value the player set.
Step S1008: the entire right side of the UI-layer screen becomes the operation area for lens moving; when the player's two fingers slide in the lens-moving area, the client detects the sliding operation.
Step S1010: all responses in the lens-moving area other than the lens move are excluded.
Step S1012: the client records two parameters: the sliding distance x of the midpoint between the two fingers in the lens-moving area, and the sliding direction a of that midpoint.
Step S1014: the distance y by which the lens moves on the operation layer is calculated according to the coefficient k.
The lens movement distance y = the two-finger sliding distance x * the lens sensitivity coefficient k, giving the distance y the lens moves on the operation layer (the operation layer being the game scene layer, beneath the UI layer, in which the game character moves).
Step S1016: the lens is controlled to move by the distance y in the direction a on the operation layer.
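Putting steps S1002 to S1016 together, the client-side flow might be sketched as follows (the settings and camera objects are illustrative assumptions):

```typescript
interface LensSettings { twoFingerLensMove: boolean; k: number }

// Handle one sampled two-finger slide in the lens-moving area.
// Returns true when the slide was consumed by the lens move, so other
// responses in the area can be suppressed (step S1010).
function handleLensMoveSlide(
  settings: LensSettings,
  slide: { distance: number; direction: { x: number; y: number } }, // x and a
  camera: { pan(dx: number, dy: number): void }
): boolean {
  // S1002/S1004: with the function disabled, sliding triggers nothing.
  if (!settings.twoFingerLensMove) return false;
  // S1014: distance on the operation layer, y = x * k.
  const y = slide.distance * settings.k;
  // S1016: move the lens by y in direction a on the operation layer.
  camera.pan(slide.direction.x * y, slide.direction.y * y);
  return true;
}
```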
In the above technical solution, a hidden lens-moving area is defined over the whole right side of the screen, overlapping areas used for skill casting and the like; when two fingers slide in this area, only the lens-moving operation is responded to, and other operations in the area are not. Through this scheme, the mobile phone game gains a quick lens-moving mode with an enlarged lens-moving response area, making play more convenient: a player can quickly check the surrounding battle situation without stopping the current movement, moving with the left hand while moving the lens with the right. This solves the problem that battlefield conditions could previously only be observed through the minimap and could not be checked while operating.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to another aspect of the embodiment of the present invention, there is also provided a display apparatus of a virtual scene for implementing the display method of a virtual scene, which is applicable to the user terminal 103. Fig. 11 is a schematic diagram of an alternative display device for a virtual scene according to an embodiment of the present invention, and as shown in fig. 11, the device may include: a detection unit 1101, a determination unit 1103, and a display unit 1105.
The detecting unit 1101 is configured to detect a first event, where the first event is an event triggered on a client, and the client is configured to control a target object in a virtual scene displayed on the client.
The client can be a client of a game application, a social application, an instant messaging application and the like; the client can be installed on terminals such as a mobile terminal, a PC (personal computer), a notebook computer, an intelligent television and the like, and the first event can be directly triggered on the terminal, such as a touch screen, a pressure screen, a control handle, an image acquisition device and the like of the terminal; or triggered by another terminal in communication connection with the terminal, such as a mobile phone connected with a PC, a smart television and the like.
The virtual scene can be a game scene in a game application, a social scene in a social application, and the like; the target object can be a player character controlled by a player in a game scene, and a user character controlled by a user in a social application.
A determining unit 1103, configured to determine, in response to the first event, a first area in the virtual scene, where the first area is different from a second area, and the second area is the area where the target object is located in the virtual scene.
Optionally, the first event may be any of multiple types of events (e.g., a drag event, a slide event, a click event, a press event, a gesture, etc.). When the first area in the virtual scene is determined, event information of the first event, such as a slide distance, a number of clicks, or a pressure value, may be obtained according to the event type, and the first area in the virtual scene is then determined from the event information.
A display unit 1105 configured to display the scenes located in the first area and the second area on the client.
It should be noted that, for the client, the display interface may be divided into at least two parts: one part (the first sub-interface) is used to display the scene in the first area, and the other part (the second sub-interface) is used to display the scene in the second area. The target object may sit at the center of the entire display interface (e.g., straddling the first area and the second area), or may exist only in the second area. Since the size of the screen is fixed, the range that can be displayed is also fixed, and the position of the target object in the screen is fixed (such as in the middle of the screen). This is equivalent to defining that the second area can only be an area within a fixed distance from the target object, so the scene area the terminal can display is small.
When the target object does not move, the picture in the second sub-interface does not change. The player can then trigger a change of the picture in the first sub-interface through the various types of first events described above, for example by dragging toward any of the four directions of the screen, thereby expanding the scene area the terminal can display and the visual range of the user, which can effectively improve the user experience.
It should be noted that the detecting unit 1101 in this embodiment may be configured to execute step S202 in this embodiment, the determining unit 1103 in this embodiment may be configured to execute step S204 in this embodiment, and the displaying unit 1105 in this embodiment may be configured to execute step S206 in this embodiment.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
Through the above modules, when a first event is detected, a first area in the virtual scene is determined, where the first area is different from a second area, and the second area is the area where the target object is located in the virtual scene; the scenes in the first area and the second area are then displayed on the client. Because the first event can expand the range of the scene area that the display interface of the client can show, the technical problem in the related art that the scene area displayed by a terminal is small is solved, achieving the technical effect of expanding the range of the scene area the terminal can display.
Optionally, the display unit described above may also be used to: and displaying the scene in the first area on a first sub-interface in a display interface of the client, and displaying the scene in the second area on a second sub-interface in the display interface, wherein before the first event is detected, the scene of the area where the target object is located in the virtual scene is displayed on the first sub-interface and the second sub-interface.
The determination unit of the present application may include: the acquisition module is used for acquiring event information of a first event; and the determining module is used for determining a first area in the virtual scene according to the event information.
The determination module described above may be further configured to: move the position of a third area in the virtual scene at least according to the direction indicated by the event information to obtain the first area, where the third area is the area containing the scene displayed on the first sub-interface before the move.
Optionally, when performing the step of moving the position of the third area in the virtual scene at least according to the direction indicated by the event information to obtain the first area, the determining module may acquire a first distance indicated by the event information, where the first distance is the distance the third area needs to move at a first moment; and move the position of the third area in the virtual scene by the first distance in the direction indicated by the event information at the first moment to obtain the first area, where the third area is the area containing the scene displayed by the first sub-interface at a second moment, and the second moment is the moment before the first moment.
Optionally, when performing the step of moving the position of the third area in the virtual scene at least according to the direction indicated by the event information to obtain the first area, the determining module may acquire a first distance indicated by the event information, where the first distance is the distance the third area needs to move at the first moment; when the first distance is smaller than a second distance, move the position of the third area in the virtual scene by the first distance in the direction indicated by the event information to obtain the first area, where the second distance is the distance between the third area and the boundary of a fourth area in the direction indicated by the event information, the fourth area is the area allowed to be viewed in the virtual scene, and the third area is located within the fourth area; and when the first distance is not smaller than the second distance, move the position of the third area in the virtual scene by the second distance in the direction indicated by the event information to obtain the first area.
Optionally, when performing the step of determining the first area in the virtual scene according to the event information, the determining module may acquire a first position represented by the event information and determine the area where the first position is located in the virtual scene as the first area.
The determining module may be further configured to acquire the first position corresponding to the touch position represented by the event information, where the touch position is the trigger position of the event in a third sub-interface, the third sub-interface is used for displaying a scene map of the virtual scene, and the first position is the position to which the touch position maps in the scene map; determine the area where the first position is located in the scene map as a fifth area; and take the part of the virtual scene represented by the fifth area as the first area.
The acquisition module of the present application is further operable to: under the condition that the first event is a sliding event, acquiring event information of the sliding event on the client, wherein the event information comprises a sliding direction and a third distance of the sliding event, and the third distance is used for determining the first distance; and under the condition that the first event is the touch event, acquiring event information of the touch event on the client, wherein the event information comprises the touch position of the touch event.
In the above technical solution, a hidden lens-moving area is defined over the whole right side of the screen, overlapping areas used for skill casting and the like; when two fingers slide in this area, only the lens-moving operation is responded to, and other operations in the area are not. Through this scheme, the mobile phone game gains a quick lens-moving mode with an enlarged lens-moving response area, making play more convenient: a player can quickly check the surrounding battle situation without stopping the current movement, moving with the left hand while moving the lens with the right. This solves the problem that battlefield conditions could previously only be observed through the minimap and could not be checked while operating.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may be operated in a hardware environment as shown in fig. 1, and may be implemented by software, or may be implemented by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiment of the present invention, a server or a terminal for implementing the display method of the virtual scene is also provided.
Fig. 12 is a block diagram of a terminal according to an embodiment of the present invention, and as shown in fig. 12, the terminal may include: one or more processors 1201 (only one is shown in fig. 12), a memory 1203, and a transmission means 1205 (such as the transmission means in the above embodiments), as shown in fig. 12, the terminal may further include an input-output device 1207.
The memory 1203 may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for displaying a virtual scene in the embodiment of the present invention, and the processor 1201 executes various functional applications and data processing by running the software programs and modules stored in the memory 1203, that is, implements the above-mentioned method for displaying a virtual scene. The memory 1203 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1203 may further include memory located remotely from the processor 1201, which may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The above-mentioned transmission means 1205 is used for receiving or sending data via a network, and may also be used for data transmission between the processor and the memory. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 1205 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices to communicate with the internet or a local area Network. In one example, the transmission device 1205 is a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
Among them, the memory 1203 is specifically used for storing an application program.
The processor 1201 may invoke an application stored in the memory 1203 via the transmission device 1205 to perform the following steps:
detecting a first event, wherein the first event is an event triggered on a client, and the client is used for controlling a target object in a virtual scene displayed on the client;
in response to a first event, determining a first area in the virtual scene, wherein the first area is different from a second area, and the second area is an area where a target object in the virtual scene is located;
displaying the scenes located in the first area and the second area on the client.
The processor 1201 is further configured to perform the following steps:
acquiring a first distance indicated by the event information, where the first distance is the distance the third area needs to move at the first moment;
when the first distance is smaller than a second distance, moving the position of the third area in the virtual scene by the first distance in the direction indicated by the event information to obtain the first area, where the second distance is the distance between the third area and the boundary of a fourth area in the direction indicated by the event information, the fourth area is the area allowed to be viewed in the virtual scene, and the third area is located within the fourth area;
and when the first distance is not smaller than the second distance, moving the position of the third area in the virtual scene by the second distance in the direction indicated by the event information to obtain the first area.
By adopting the embodiments of the present invention, when a first event is detected, a first area in the virtual scene is determined, where the first area is different from a second area, and the second area is the area where the target object is located in the virtual scene; the scenes in the first area and the second area are then displayed on the client. Because the first event can expand the range of the scene area that the display interface of the client can show, the technical problem in the related art that the scene area displayed by a terminal is small is solved, achieving the technical effect of expanding the range of the scene area the terminal can display.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in fig. 12 is only an illustration, and the terminal may be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet Device (MID), or a PAD. Fig. 12 does not limit the structure of the electronic device. For example, the terminal may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 12, or have a different configuration from that shown in fig. 12.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The embodiments of the present invention also provide a storage medium. Alternatively, in this embodiment, the storage medium may be used to store program code for executing the display method of a virtual scene.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
s12, detecting a first event, wherein the first event is an event triggered on a client, and the client is used for controlling a target object in a virtual scene displayed on the client;
s14, responding to the first event, determining a first area in the virtual scene, wherein the first area is different from a second area, and the second area is an area where the target object is located in the virtual scene;
s16, the scene located in the first area and the second area is displayed on the client.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
S22, acquiring a first distance indicated by the event information, wherein the first distance is the distance that the third area needs to move at the first moment;
S24, in the case that the first distance is smaller than a second distance, moving the position of the third area in the virtual scene in the direction indicated by the event information by the first distance to obtain the first area, wherein the second distance is the distance between the third area and a boundary of a fourth area in the direction indicated by the event information, the fourth area is an area allowed to be viewed in the virtual scene, and the third area is located in the fourth area;
S26, in the case that the first distance is not smaller than the second distance, moving the position of the third area in the virtual scene in the direction indicated by the event information by the second distance to obtain the first area.
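Steps S24 and S26 together amount to clamping the movement of the third area so that it never crosses the boundary of the fourth area. A minimal sketch of that clamp, with illustrative names (the one-dimensional position along the slide axis is an assumption made here for brevity):

```typescript
// Clamp the third area's movement (S24/S26): move by the first distance unless
// that would cross the fourth area's boundary, in which case move only by the
// second distance. Positions and distances are along the indicated direction.
function moveThirdArea(
  thirdAreaPosition: number, // current position of the third area
  firstDistance: number,     // distance the third area needs to move at the first moment
  secondDistance: number,    // distance from the third area to the fourth area's boundary
): number {
  return thirdAreaPosition + Math.min(firstDistance, secondDistance);
}
```

Collapsing the two branches into a single `Math.min` is an implementation choice; the embodiment states the two cases separately.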
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disc, and other media capable of storing program code.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.

Claims (13)

1. A method for displaying a virtual scene, comprising:
detecting a first event, wherein the first event is an event triggered on a client, and the client is used for controlling a target object in a virtual scene displayed on the client;
under the condition that the first event is a sliding event, acquiring event information of the sliding event on the client, wherein the event information comprises a sliding direction and a third distance of the sliding event, and calculating a first distance by using the third distance;
moving the position in the virtual scene currently displayed on the client in the direction indicated by the event information by the first distance, and determining a first area in the virtual scene, wherein the first area is different from a second area, and the second area is the area in which the target object is located in the virtual scene;
displaying a scene located within the first area and the second area on the client.
2. The method of claim 1, wherein displaying the scene in the first area and the second area on the client comprises:
and displaying the scene in the first area on a first sub-interface in a display interface of the client, and displaying the scene in the second area on a second sub-interface in the display interface.
3. The method of claim 2, wherein prior to detecting the first event, the method further comprises:
and displaying the scene of the area where the target object is located in the virtual scene on the first sub-interface and the second sub-interface.
4. The method of claim 1, wherein moving the position in the virtual scene currently displayed on the client by the first distance in the direction indicated by the event information and determining the first area in the virtual scene comprises:
and moving the position of a third area in the virtual scene at least according to the direction indicated by the event information to obtain the first area, wherein the first area is obtained after the position of the third area in the virtual scene is moved, the virtual scene displayed in the third area is the virtual scene currently displayed on the client, and the third area is the area in which the scene displayed on the first sub-interface before the movement is located.
5. The method of claim 4, wherein moving the position of the third area in the virtual scene according to at least the direction indicated by the event information, and obtaining the first area comprises:
acquiring the first distance indicated by the event information, wherein the first distance is a distance which the third area needs to move at a first moment;
and moving the position of the third area in the virtual scene in the direction indicated by the event information by the first distance at the first moment to obtain the first area, wherein the third area is the area in which the scene displayed by the first sub-interface at a second moment is located, and the second moment is a moment before the first moment.
6. The method of claim 4, wherein moving the position of the third area in the virtual scene according to at least the direction indicated by the event information, and obtaining the first area comprises:
acquiring a first distance indicated by the event information, wherein the first distance is a distance which the third area needs to move at a first moment;
under the condition that the first distance is smaller than a second distance, moving the position of the third area in the virtual scene in the direction indicated by the event information by the first distance to obtain the first area, wherein the second distance is the distance between the third area and the boundary of a fourth area in the direction indicated by the event information, the fourth area is an area allowed to be viewed in the virtual scene, and the third area is located in the fourth area;
and under the condition that the first distance is not smaller than the second distance, moving the position of the third area in the virtual scene in the direction indicated by the event information by the second distance to obtain the first area.
7. The method of claim 1, wherein moving the position in the virtual scene currently displayed on the client by the first distance in the direction indicated by the event information and determining the first area in the virtual scene comprises:
acquiring a first position represented by the event information;
and determining the area where the first position in the virtual scene is located as the first area.
8. The method of claim 7,
acquiring the first position represented by the event information comprises: acquiring the first position corresponding to a touch position represented by the event information, wherein the touch position is a trigger position of the event information in a third sub-interface, the third sub-interface is used for displaying a scene map of the virtual scene, and the first position is the position to which the touch position is mapped in the scene map; and
determining that the area where the first position in the virtual scene is located is the first area comprises: determining that the area of the first position in the scene map is a fifth area; and taking the part of the virtual scene represented by the fifth area as the first area.
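The mapping in claim 8, from a touch position on the scene-map sub-interface to a position in the full virtual scene, can be pictured as a normalize-and-scale step. The sketch below is illustrative only; the rectangle parameters and the function name are assumptions introduced here, not terms of the claim.

```typescript
// Map a touch position inside the third sub-interface (the scene map) to the
// corresponding first position in the virtual scene, by normalizing the touch
// coordinates within the map and scaling them to the scene's dimensions.
function touchToScenePosition(
  touch: { x: number; y: number },
  mapRect: { x: number; y: number; width: number; height: number }, // scene-map bounds on screen (assumed)
  scene: { width: number; height: number },                         // full virtual-scene size (assumed)
): { x: number; y: number } {
  const u = (touch.x - mapRect.x) / mapRect.width;
  const v = (touch.y - mapRect.y) / mapRect.height;
  return { x: u * scene.width, y: v * scene.height };
}
```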
9. The method of claim 1, further comprising:
and displaying a configuration interface on the client, wherein the configuration interface is used for configuring a sensitivity parameter, and the first distance is a product of the third distance and the sensitivity parameter.
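Claim 9 fixes the relationship between the slide's third distance, the configured sensitivity parameter, and the first distance as a simple product. A minimal sketch, assuming the sensitivity is read from the configuration interface as a plain number (the non-negative guard is an added assumption, not stated in the claim):

```typescript
// First distance = third distance × sensitivity parameter (claim 9).
function computeFirstDistance(thirdDistance: number, sensitivity: number): number {
  return thirdDistance * Math.max(0, sensitivity);
}

// Example: a 120-pixel slide with sensitivity 1.5 yields a first distance of 180.
const firstDistance = computeFirstDistance(120, 1.5);
```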
10. A display device for a virtual scene, comprising:
the system comprises a detection unit, a processing unit and a display unit, wherein the detection unit is used for detecting a first event, the first event is an event triggered on a client, and the client is used for controlling a target object in a virtual scene displayed on the client;
the device is further configured to, when the first event is a sliding event, obtain the event information of the sliding event on the client, where the event information includes a sliding direction and a third distance of the sliding event, and calculate a first distance using the third distance;
a determining unit, configured to move the position in the virtual scene currently displayed on the client in the direction indicated by the event information by the first distance, so as to determine a first area in the virtual scene, wherein the first area is different from a second area, and the second area is the area in which the target object is located in the virtual scene;
a display unit for displaying the scenes in the first area and the second area on the client.
11. The apparatus of claim 10, wherein the determining unit is further configured to:
and moving the position of a third area in the virtual scene at least according to the direction indicated by the event information to obtain the first area, wherein the first area is obtained after the position of the third area in the virtual scene is moved, the virtual scene displayed in the third area is the virtual scene currently displayed on the client, and the third area is the area in which the scene displayed on the first sub-interface before the movement is located.
12. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program when executed performs the method of any of the preceding claims 1 to 9.
13. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the method of any of the preceding claims 1 to 9 by means of the computer program.
CN201810040140.XA 2018-01-16 2018-01-16 Virtual scene display method and device, storage medium and electronic device Active CN108310768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810040140.XA CN108310768B (en) 2018-01-16 2018-01-16 Virtual scene display method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN108310768A CN108310768A (en) 2018-07-24
CN108310768B (en) 2020-04-07

Family

ID=62893519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810040140.XA Active CN108310768B (en) 2018-01-16 2018-01-16 Virtual scene display method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN108310768B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109432775B (en) * 2018-11-09 2022-05-17 网易(杭州)网络有限公司 Split screen display method and device of game map
CN109675308A (en) * 2019-01-10 2019-04-26 网易(杭州)网络有限公司 Display control method, device, storage medium, processor and terminal in game
CN111913674B (en) * 2019-05-07 2024-07-26 广东虚拟现实科技有限公司 Virtual content display method, device, system, terminal equipment and storage medium
CN110860082B (en) * 2019-11-20 2023-04-07 网易(杭州)网络有限公司 Identification method, identification device, electronic equipment and storage medium
CN111589142B (en) * 2020-05-15 2023-03-21 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment and medium
CN114972602B (en) * 2022-06-14 2024-10-11 中国电信股份有限公司 Method for rendering data, storage medium and electronic device
CN115981518B (en) * 2023-03-22 2023-06-02 北京同创蓝天云科技有限公司 VR demonstration user operation method and related equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030153374A1 (en) * 2002-02-12 2003-08-14 Anell Gilmore Interactive video racing game
CN104765905B (en) * 2015-02-13 2018-08-03 上海同筑信息科技有限公司 Plan view and the first visual angle split screen synchronous display method based on BIM and system
CN105208368A (en) * 2015-09-23 2015-12-30 北京奇虎科技有限公司 Method and device for displaying panoramic data
CN105760076B (en) * 2016-02-03 2018-09-04 网易(杭州)网络有限公司 game control method and device
CN105808071B (en) * 2016-03-31 2019-03-29 联想(北京)有限公司 A kind of display control method, device and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1110585A2 (en) * 1999-12-14 2001-06-27 KCEO Inc. A video game apparatus, game image display control method, and readable storage medium

Also Published As

Publication number Publication date
CN108310768A (en) 2018-07-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant