
CN116016963A - Live broadcast picture display method and device - Google Patents


Info

Publication number
CN116016963A
Authority
CN
China
Prior art keywords
target
control mode
virtual
character
role
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211538639.6A
Other languages
Chinese (zh)
Inventor
夏涛
张恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202211538639.6A priority Critical patent/CN116016963A/en
Publication of CN116016963A publication Critical patent/CN116016963A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present application provides a live broadcast picture display method and apparatus. The method includes: displaying, in a target live room, an initial virtual picture generated by a target virtual character under a first control mode; receiving a control mode switching request generated based on a preset trigger condition, the request including switching information; switching the first control mode to a second control mode based on the switching information, where each of the first and second control modes is one of a device control mode, an action control mode, and a semi-automatic control mode; acquiring character data according to the second control mode and mapping the character data onto the target virtual character to obtain a target virtual picture; and displaying the target virtual picture in the target live room.

Description

Live broadcast picture display method and device
Technical Field
The present application relates to the field of computer technology, and in particular to a live broadcast picture display method. The application also relates to a live broadcast picture display apparatus, a computing device, and a computer-readable storage medium.
Background
With the continuous development of computer technology, network live broadcast technology has also improved steadily. To enrich the forms of live broadcast, a real anchor can control a virtual character to perform the broadcast, improving the viewing experience of the audience.
However, current methods of driving an avatar in a live room either capture the real anchor's motions or facial expressions, or rely on associated devices such as a keyboard and mouse, so the avatar can only be operated in a single way at any given time.
Therefore, how to enrich the ways an avatar can be operated in a live scene is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of this, embodiments of the present application provide a live broadcast picture display method. The application also relates to a live broadcast picture display apparatus, a computing device, and a computer-readable storage medium, so as to solve the lack of diversified virtual character driving modes in the prior art.
According to a first aspect of the embodiments of the present application, a live broadcast picture display method is provided, including:
displaying, in a target live room, an initial virtual picture generated by a target virtual character under a first control mode;
receiving a control mode switching request generated based on a preset trigger condition, the request including switching information;
switching the first control mode to a second control mode based on the switching information, where each of the first and second control modes is one of a device control mode, an action control mode, and a semi-automatic control mode;
acquiring character data according to the second control mode, and mapping the character data onto the target virtual character to obtain a target virtual picture;
and displaying the target virtual picture in the target live room.
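The five steps of the first aspect can be sketched as a minimal state machine. This is an illustrative sketch only: the names `ControlMode`, `LiveRoom`, and the string-based "pictures" are assumptions for demonstration, not part of the patent.

```python
from enum import Enum

class ControlMode(Enum):
    DEVICE = "device"        # keyboard/mouse or virtual joystick input
    ACTION = "action"        # motion/facial capture input
    SEMI_AUTO = "semi_auto"  # mix of device and capture input

class LiveRoom:
    def __init__(self, initial_mode: ControlMode):
        self.mode = initial_mode
        self.frames = []  # pictures displayed in the live room

    def show(self, picture: str):
        self.frames.append(picture)

    def handle_switch_request(self, switching_info: str):
        # Switching information uniquely identifies the second control mode.
        self.mode = ControlMode(switching_info)

    def render_frame(self, character_data: str) -> str:
        # Map character data onto the target virtual character.
        return f"{self.mode.value}:{character_data}"

room = LiveRoom(ControlMode.DEVICE)
room.show(room.render_frame("idle"))  # initial virtual picture (first mode)
room.handle_switch_request("action")  # control mode switching request
room.show(room.render_frame("wave"))  # target virtual picture (second mode)
```

The key design point the patent claims is that rendering is decoupled from input: the same mapping step consumes character data regardless of which mode produced it.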
According to a second aspect of the embodiments of the present application, a live broadcast picture display apparatus is provided, including:
a first display module configured to display, in a target live room, an initial virtual picture generated by a target virtual character under a first control mode;
a receiving module configured to receive a control mode switching request, the request including switching information;
a switching module configured to switch the first control mode to a second control mode based on the switching information, where each of the first and second control modes is one of a device control mode, an action control mode, and a semi-automatic control mode;
a mapping module configured to acquire character data according to the second control mode and map the character data onto the target virtual character to obtain a target virtual picture;
and a second display module configured to display the target virtual picture in the target live room.
According to a third aspect of the embodiments of the present application, a computing device is provided, including a memory, a processor, and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the live broadcast picture display method when executing the computer instructions.
According to a fourth aspect of the embodiments of the present application, a computer-readable storage medium is provided, storing computer instructions which, when executed by a processor, implement the steps of the live broadcast picture display method.
According to the live broadcast picture display method of the present application, an initial virtual picture generated by a target virtual character under a first control mode is displayed in a target live room; a control mode switching request generated based on a preset trigger condition is received, the request including switching information; the first control mode is switched to a second control mode based on the switching information, where each of the first and second control modes is one of a device control mode, an action control mode, and a semi-automatic control mode; character data is acquired according to the second control mode and mapped onto the target virtual character to obtain a target virtual picture; and the target virtual picture is displayed in the target live room.
In this way, while the initial virtual picture generated under the first control mode is displayed in the target live room, the control mode can be switched to the second control mode based on the switching request and the target virtual picture under the second control mode displayed, thereby enriching the control modes of the virtual character.
Drawings
Fig. 1 is a schematic view of a live view display method according to an embodiment of the present application;
fig. 2a is a flowchart of a live view display method according to an embodiment of the present application;
fig. 2b is a schematic flow chart of a detection camera device according to an embodiment of the present application;
fig. 3 is a process flow diagram of a live view display method applied to a game scene according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a live broadcast display device according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a computing device according to one embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application. However, this application may be embodied in many other ways than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific embodiments disclosed below.
The terminology used in one or more embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of one or more embodiments of the application. As used in this application in one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, a first may also be referred to as a second, and similarly a second as a first, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
First, terms related to one or more embodiments of the present application will be explained.
Motion capture: recording the limb movements of an observed subject (human or animal) by various technical means.
Facial capture: recording the facial movements of an observed subject (human or animal) by various technical means.
At present, live broadcast in virtual scenes relies mainly on motion/facial capture alone, and diversified avatar driving modes are lacking.
In the present solution, motion capture, facial capture, and control-device input (such as keyboard and mouse) can be switched automatically and configured flexibly, enabling more diversified avatar driving and scene interaction.
In the present application, a live broadcast picture display method is provided. The application further relates to a live broadcast picture display apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments.
Fig. 1 shows a scene schematic diagram of a live broadcast picture display method according to an embodiment of the present application, as follows:
It should be noted that the live broadcast picture display method of the present application can be applied to a multi-anchor live scene, where each anchor controls a virtual character in the virtual scene through an anchor client. A multi-anchor live scene is a scene in which multiple anchors can interact within the same virtual scene.
A target anchor starts a target live room, in which the target anchor's virtual character can be displayed; the target live room can provide three virtual character control modes:
the control mode 1 is a device-only control mode, if the target anchor uses a computer end to carry out live broadcast, the virtual character can be controlled based on control devices such as a mouse, a keyboard and the like, for example, the movement of the virtual character is realized based on a W, A, S, D key in the keyboard, the sliding of the mouse changes the visual angle of the virtual character in a virtual scene, the taking off of a space, the pressing of shift fast running, the squatting of a C key, the interaction of F picked objects and the like; if the target anchor uses the mobile phone terminal to carry out live broadcast, virtual roles can be controlled based on virtual equipment such as virtual rockers, virtual auxiliary rockers, icon buttons and the like; the control of the avatar may include control of the avatar itself, such as controlling the avatar to move forward, jump, etc.; control of interactions of virtual characters with virtual objects in a virtual scene may also be included, such as interactions with other virtual characters in a virtual scene, interactions with props in a virtual scene, and so forth.
Control mode 2 is an action-only control mode, which can be enabled if the target anchor's client includes an image acquisition device. In this mode, the image acquisition device captures images of the target anchor, the images are analyzed to obtain virtual driving data, and the virtual character is controlled based on that driving data.
Control mode 3 is a semi-automatic control mode. If the default mode of the target live room is the device control mode and the action control mode is enabled, the target anchor can switch the virtual character between device control and action control as needed during the broadcast, which enriches the control modes available in a live scene; that is, parts of the virtual character can be driven by motion capture data without any device operation. The anchor client interface includes a mode switching button, and the action control mode can be turned on or off by tapping it.
Specifically, if the target anchor is currently operating the virtual character through a control device, motion capture data of the anchor can be collected automatically to control the character once the anchor stops operating; conversely, if the character is being controlled in the action control mode, the anchor can switch straight back to device control simply by operating the control device. Furthermore, different parts of the virtual character can be controlled by different modes: for example, facial capture can drive the character's face while device control drives the upper body, and so on.
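The semi-automatic rule just described can be sketched as a small priority function: device input wins while present, motion capture takes over when it stops, and different parts can resolve independently. The function names and part grouping are illustrative assumptions.

```python
def pick_mode(device_active: bool, mocap_available: bool) -> str:
    """Whole-character rule: device input wins; otherwise fall back to mocap."""
    if device_active:
        return "device"
    if mocap_available:
        return "action"
    return "idle"

def pick_mode_per_part(device_active: bool, face_capture: bool) -> dict:
    """Per-part rule: each character part may use a different control mode."""
    return {
        "face": "action" if face_capture else "none",
        "upper_body": "device" if device_active else "action",
    }
```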
In this way, while the initial virtual picture generated under the device control mode is displayed in the target live room, the target virtual picture can be displayed based on a control mode switching request, enriching the control modes of the virtual character and enabling switching among them; and because the character driving data is collected according to character part type information, different parts of the character can be controlled in different ways, further enriching the control modes.
Fig. 2a shows a flowchart of a live broadcast picture display method according to an embodiment of the present application, which specifically includes the following steps:
step 202: and displaying an initial virtual picture generated by the target virtual character based on the first control mode in the target live broadcasting room.
The target live broadcasting room is a live broadcasting room created by a live broadcasting platform based on a live broadcasting request of a target host; the target virtual roles refer to virtual roles corresponding to target anchor which can be displayed in a target live broadcasting room; the first control mode refers to one of virtual character control modes which can be adopted by a target live broadcasting room, and in practical application, the first control mode can be a device control mode, an action control mode and a semi-automatic control mode; the device control mode refers to a mode for controlling the virtual character based on device control data triggered by the target anchor, and under the control mode, the target anchor can operate the target virtual character based on the control device; the control device may be a physical device such as a mouse, a keyboard, a microphone, or a virtual device such as a virtual rocker and a virtual button, and the application is not specifically limited; the action control mode is a mode for controlling the virtual character based on the action capturing data of the target anchor; the semi-automatic control mode is a mode for controlling the virtual character based on dynamic capture data and/or equipment control data corresponding to a target anchor; the initial virtual screen refers to a screen generated by the target anchor by controlling the virtual character.
Specifically, a target virtual role corresponding to a target anchor in a target living room is displayed in the target living room, and the target virtual role can be controlled based on a first control mode to generate an initial virtual picture.
Step 204: receive a control mode switching request generated based on a preset trigger condition, the request including switching information.
The preset trigger condition is a condition that can trigger a switch of the target live room's control mode; specifically, it may be an anchor trigger condition, a control device trigger condition, or a character part trigger condition. An anchor trigger condition means the target anchor generates a mode switching request by activating a mode switching control of the target live room. A control device trigger condition means that, while the character is in a non-device control mode, the target anchor generates a mode switching request by operating a control device. A character part trigger condition means a mode switching request is triggered according to character part type information of the virtual character.
The control mode switching request is a request to switch the control mode of the target virtual character; it can be generated by the target anchor switching modes manually, or automatically when control information belonging to another control mode is received. The switching information is information that uniquely identifies the second control mode, such as a mode identifier or mode name, for example switching information designating the action control mode.
Specifically, while the target live room displays the initial virtual picture, a control mode switching request generated by a preset trigger condition may be received; for example, the anchor enabling the action control mode of the target live room generates a request to switch from the device control mode to the action control mode.
Step 206: switch the first control mode to a second control mode based on the switching information, where each of the first and second control modes is one of a device control mode, an action control mode, and a semi-automatic control mode.
The second control mode is a virtual character control mode available in the target live room that differs from the first control mode; it can likewise be a device control mode, an action control mode, or a semi-automatic control mode. For example, the first control mode may be the device control mode and the second the action control mode, and so on.
Specifically, the control mode switching request is parsed to determine the switching information; the second control mode is determined from the switching information, and the control mode of the target live room is switched from the first to the second control mode.
Step 208: acquire character data according to the second control mode, and map the character data onto the target virtual character to obtain the target virtual picture.
Character data is the data used to drive the target virtual character: for example, in the device control mode it is character control information, and in the action control mode it is character driving information, each used to drive the character and generate the corresponding virtual picture. The target virtual picture is the virtual picture obtained from the character data and the target virtual character.
Specifically, the character data corresponding to the second control mode is acquired and mapped onto the target virtual character, obtaining the target virtual picture under the second control mode.
Step 210: display the target virtual picture in the target live room.
Specifically, after a preset virtual picture or a target virtual picture corresponding to the virtual character driving data is obtained, it is displayed directly. In practice, different character parts may be under different control modes, so if both a preset virtual picture and a target virtual picture are received, they need to be fused for display.
Further, when the first control mode is the device control mode and the second control mode is the action control mode, the steps of switching the control mode and generating the corresponding virtual picture are as follows:
displaying, in the target live room, an initial virtual picture generated by the target virtual character under the device control mode;
receiving a control mode switching request carrying character part type information;
collecting virtual character driving data corresponding to the target anchor according to the character part type information;
mapping the virtual character driving data onto the target virtual character according to the character part type information to obtain a target virtual picture;
and displaying the target virtual picture in the target live room.
Specifically, the target anchor can set the control mode of the virtual character in the target live room as needed; if the anchor sets the control mode of the target live room to the device control mode, or the room's default control mode is the device control mode, the initial virtual picture generated under that mode can be presented in the target live room.
In a specific embodiment of the application, a lecturer anchor starts a virtual classroom live room and sets the control mode of the virtual teacher to the device control mode; the lecturer can move the virtual teacher around the virtual classroom by moving the mouse, thereby generating the virtual animation displayed in the room.
Further, before the initial virtual picture is displayed in the target live room, it needs to be generated based on the character information corresponding to the target virtual character.
Specifically, before the target live room displays the initial virtual picture generated by the target virtual character under the device control mode, the method may further include:
receiving a character control request for the target virtual character, and determining device control information and control part type information based on the request;
and mapping the device control information onto the target virtual character based on the control part type information to obtain the initial virtual picture.
The character control request is a request generated by the target anchor controlling the virtual character through a control device; it contains device control information and control part type information. Device control information is control information for the target virtual character, for example making the virtual teacher raise the right hand. Control part type information specifies the part types of the virtual character that the request needs to control, for example the virtual teacher's upper body, lower body, or head.
Specifically, after receiving a character control request for the target virtual character, the anchor client parses the request and determines the device control information and control part type information; which part of the target virtual character needs to be adjusted can be determined from the control part type information, and the form of the adjustment from the device control information.
In one embodiment of the present application, the anchor client used by the lecturer anchor receives a virtual character control request for the virtual teacher; the request is parsed to obtain the device control information (the virtual teacher raises the right arm) and the control part type information (the arm); the device control information is then mapped onto the virtual teacher's arm to obtain the initial virtual picture.
The initial virtual picture is generated based on the device control information and the control part type information, so that a picture of the virtual character being controlled by the control device is displayed to the target anchor in the target live room.
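A minimal sketch of parsing such a character control request and mapping the device control information onto the named part follows. All field names (`device_control`, `part_type`) and the dictionary-based character representation are illustrative assumptions.

```python
def parse_control_request(request: dict) -> tuple:
    """Split a character control request into its two components."""
    device_info = request["device_control"]  # e.g. "raise_right_arm"
    part_type = request["part_type"]         # e.g. "arm"
    return device_info, part_type

def apply_to_character(character: dict, device_info: str, part_type: str) -> dict:
    """Map the device control information onto the requested character part."""
    updated = dict(character)  # leave other parts untouched
    updated[part_type] = device_info
    return updated

teacher = {"arm": "rest", "head": "neutral"}
info, part = parse_control_request(
    {"device_control": "raise_right_arm", "part_type": "arm"}
)
frame = apply_to_character(teacher, info, part)
```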
Further, the character part type information identifies a part of the target virtual character, for example head information, hand information, or upper body information.
Specifically, the anchor client receives a control mode switching request containing character part type information for the target virtual character. In practice, the request may be generated by the target anchor tapping a mode button in the client interface, for example clicking the semi-automatic control mode start button; or it may be generated automatically when control information of another mode is received, for example when the camera collects facial driving data of the anchor while the character is under device control, a switching request can be generated from that facial driving data so that the face animation captured by the camera replaces the face animation generated from device control information.
In one embodiment of the present application, the lecturer enables the semi-automatic control mode in the virtual classroom live room; while controlling the virtual teacher through a control device, if motion capture data of the lecturer is received, a control mode switching request is generated so that device control is switched to action control.
The virtual animation is then adjusted based on the received control mode switching request.
Further, after the control mode switching request is received, the character part type information can be determined from it; this information corresponds to a part of the target virtual character, for example head information or hand information, and the data for driving the character under the new control mode can be collected based on it.
Virtual character driving data is the data used to drive the target virtual character to generate a virtual picture, for example item-pickup data or blink data.
Specifically, the received control mode switching request is parsed to obtain the character part type information, i.e., to determine which parts of the target virtual character should have their control mode switched; virtual character driving data corresponding to the target anchor is then collected based on that information and used to drive those character parts.
In practice, collecting the virtual character driving data corresponding to the target anchor according to the character part type information may include:
starting a target acquisition device and collecting an anchor image of the target anchor with it;
and acquiring the virtual character driving data from the anchor image based on the character part type information.
The target acquisition device is a device capable of capturing images of the target anchor, for example an image acquisition device built into the anchor client or one connected to it; the anchor image is the image of the target anchor collected by that device, such as a video or photo of the anchor.
Further, the method for acquiring virtual character driving data in the anchor image based on the character part type information may include:
analyzing the anchor image to obtain anchor action information;
and extracting virtual character driving data from the anchor action information according to the character part type information.
The anchor action information refers to action information of a target anchor analyzed in an anchor image, for example, action information of anchor hands, fist making and the like in anchor video.
Specifically, acquiring a main broadcasting image acquired by target acquisition equipment; inputting the anchor image into the action information acquisition model to acquire anchor action information output by the action information acquisition model; determining that a target virtual character is at a target character part based on character part type information, and collecting anchor action information corresponding to the target character part in anchor action information as virtual character driving data.
In a specific embodiment of the present application, an image acquisition device in data-transmission connection with the anchor client is started in response to the control mode switching request; the video of the target anchor acquired by the image acquisition device is obtained; the anchor action information in the anchor video is extracted, and the target character part of the virtual teacher is determined according to the character part type information; the anchor action information corresponding to the target character part is then screened out of the anchor action information as the virtual character driving data.
In an actual application, before the initial virtual picture generated by the target virtual character based on the first control mode is displayed in the target live broadcasting room, the method further comprises the following steps:
detecting the image capturing device corresponding to the target live broadcasting room and the on/off state of the image capturing device;
and in the case where it is determined that the target image capturing device is in the on state, determining that the first control mode is the action control mode.
That is, whether the action control mode can be enabled can be determined by detecting whether the anchor client is connected to an image capturing device and whether that device is on; as shown in fig. 2b, which is a schematic flow chart of detecting the image capturing device according to an embodiment of the present application, the detection includes steps s1 to s7:
Step s1: the target anchor enters the target live broadcasting room.
Step s2: and loading virtual scene information and virtual role information of the target live broadcasting room.
Step s3: detecting whether any usable image capturing device exists on the anchor client where the target anchor is located, and the number of such devices; if there are a plurality of image capturing devices, performing step s4; if there is only one image capturing device, performing step s5; if there is no usable image capturing device, performing step s6.
Step s4: a default image capturing device is selected from the plurality of image capturing devices, and step s5 is continued.
Step s5: the target live broadcasting room automatically enables the action control mode.
Step s6: the action control mode remains off, and a prompt is displayed indicating that the action control mode failed to be enabled.
Step s7: in the action control mode, the virtual character is controlled based on the virtual character driving data acquired by the default image capturing device, so that the target anchor controls the target virtual character to interact in the virtual scene.
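Steps s3 to s6 of the detection flow can be sketched as follows; a minimal illustration under the assumption that usable image capturing devices are given as a list (the function name, mode labels, and default-index choice are all hypothetical):

```python
def select_camera_and_mode(available_cameras, default_index=0):
    """Steps s3-s6: choose an image capturing device and decide whether
    the action control mode can be enabled for the live room."""
    if not available_cameras:                  # s6: no usable device
        return None, "action_mode_off"
    if len(available_cameras) > 1:             # s4: pick the default one
        camera = available_cameras[default_index]
    else:                                      # only one device exists
        camera = available_cameras[0]
    return camera, "action_mode_on"            # s5: auto-enable the mode

print(select_camera_and_mode(["cam0", "cam1"]))  # ('cam0', 'action_mode_on')
print(select_camera_and_mode([]))                # (None, 'action_mode_off')
```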
In this way, the anchor image of the target anchor is collected by the target acquisition device so that the virtual character can be controlled based on the anchor image; the virtual character driving data are determined in the anchor image based on the character part type information so that the target virtual character can subsequently be driven by the virtual character driving data.
Further, after the virtual character driving data is determined, the character part of the target virtual character may be driven according to the virtual character driving data, thereby obtaining the target virtual picture.
Wherein, the target virtual picture refers to the virtual picture obtained based on the virtual character data and the target virtual character.
Specifically, after the virtual character driving data corresponding to the character part type information are obtained, the virtual character driving data can be mapped, based on the character part type information, to the character part corresponding to that information, that is, mapped to the target virtual character, so as to obtain the target virtual picture in which the virtual character is controlled by the virtual character driving data.
In an actual application, the method for mapping the virtual character driving data to the target virtual character according to the character part type information to obtain the target virtual picture may include:
determining a target role part of the target virtual role according to the role part type information;
determining character part attribute information corresponding to the target character part;
and adjusting the character part attribute information based on the virtual character driving data to obtain a target virtual picture.
The target character part refers to the character part determined in the target virtual character according to the character part type information; the character part attribute information refers to the attribute information corresponding to each target character part, for example, the attribute information corresponding to an arm part may be lifted sideways, bent downward at 90 degrees, and the like.
Specifically, a target character part, e.g., a head, a hand, etc., is determined in the target virtual character according to character part type information in the control mode switching request; acquiring a role database corresponding to a target role part, and screening role part attribute information corresponding to the target role part in the role database, wherein the role part attribute information comprises role attribute values such as a part height value, a part angle value and the like; and determining a character driving value contained in the virtual character driving data, and replacing the character attribute value based on the character driving value to obtain a target virtual picture.
In a specific embodiment of the present application, according to the character part type information "hand" and "head", the character hand and the character head are determined among the virtual parts corresponding to the virtual teacher; the hand attribute values corresponding to the character hand and the head attribute values corresponding to the character head are collected from the database corresponding to the virtual teacher; the hand attribute values are replaced according to the driving hand attribute values in the virtual character data, and the head attribute values are replaced according to the driving head attribute values, so as to obtain the target virtual picture.
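The attribute-replacement step can be sketched as follows. The attribute names (part height value, part angle value) follow the examples in the text, while the function name and data shapes are assumptions for illustration:

```python
def apply_driving_values(part_attrs: dict, driving_values: dict) -> dict:
    """Replace stored character part attribute values (e.g. part height
    value, part angle value) with the matching character driving values."""
    updated = dict(part_attrs)
    for key, value in driving_values.items():
        if key in updated:        # only attributes the part actually has
            updated[key] = value
    return updated

arm = {"height": 0.4, "angle": 90}            # current arm pose attributes
driven = apply_driving_values(arm, {"angle": 15, "unknown": 1})
print(driven)  # {'height': 0.4, 'angle': 15}
```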
In practical application, each virtual character can present a corresponding character state in the virtual scene, such as a dancing state, a walking state, an object-picking state, and the like; in different character states, different character parts can be controlled by different control modes, so that the stability of the virtual picture is ensured.
Specifically, before the virtual character driving data corresponding to the target anchor is collected according to the character part type information, the method may further include:
acquiring a current role state of the target virtual role;
and determining static part information and dynamic part information in the character part type information based on the current character state.
The current character state refers to the character state of the target virtual character in the target live broadcasting room, for example, a dancing state or a flying state; the character part type information can be divided into static part information and dynamic part information based on the current character state. The static part information refers to the character parts of the target virtual character that can be in a static state under the current character state; the dynamic part information refers to the character parts that can be in a dynamic state under the current character state. For example, in a character state in which the virtual character is dancing, the face may be a static part while the upper body and the lower body may be dynamic parts.
Specifically, the current character state of the target virtual character in the target live broadcasting room is obtained, where the current character state can be set by the target anchor or defaulted by the target live broadcasting room. In practical application, if the virtual character is in a dynamic state, mapping the collected anchor action data or face data to the virtual character results in poor stability of the virtual picture and affects the viewing experience of the user; therefore, the character part type information can be classified according to the current character state to determine the static part information and the dynamic part information.
To address the poor stability of the virtual picture, the present application processes the parts corresponding to the static part information and the parts corresponding to the dynamic part information separately: the former can be driven by the action data or facial data of the anchor, while the latter are displayed based on preset virtual animations.
Specifically, in the case that the character part type information includes static part information, the method for collecting virtual character driving data corresponding to the target anchor according to the character part type information may include:
determining the static part of the target virtual character according to the static part information;
and acquiring virtual character driving data corresponding to the target anchor based on the static part.
The static part refers to a part where the target virtual character can be in a static state in the current character state.
In practical application, the character part type information may include only static part information, only dynamic part information, or both static part information and dynamic part information; for example, the current state of the virtual character is a standing state, only static part information may be included in the character part type information, and the current state of the virtual character is a dancing state, only dynamic part information may be included in the character part type information.
In a specific embodiment of the present application, the current character state of the virtual teacher is obtained as a standing state, and the target character part type information includes head part information and upper body part information; it is then determined that both the head part information and the upper body part information are static part information. Further, the face capture data and motion capture data of the target anchor corresponding to the virtual teacher, that is, the teaching teacher, are collected according to the static part information, and the virtual character driving data are determined based on the face capture data and the motion capture data.
Specifically, in the case that the character part type information includes dynamic part information, the method for acquiring virtual character driving data corresponding to the target anchor according to the character part type information may include:
determining the dynamic part of the target virtual character according to the dynamic part information;
and acquiring a preset virtual character animation corresponding to the target anchor based on the dynamic part.
The dynamic part refers to a part of the target virtual character that is in a dynamic state under the current character state; the preset virtual character animation refers to the virtual character motion corresponding to the dynamic part, that is, a virtual character animation generated in advance for the dynamic part. When the preset virtual character animation is displayed, no control by the target anchor is required.
In a specific embodiment of the present application, the current character state of the virtual teacher is obtained as a dancing state, and the target character part type information includes upper body part information and lower body part information; determining that the upper body part information and the lower body part information are dynamic part information according to the dancing state; further, a preset virtual character animation corresponding to each piece of dynamic part information is obtained.
In a preferred embodiment of the present application, whether a preset virtual character animation and/or virtual character driving data corresponding to the character part type information needs to be acquired may be determined based on a preset character state table, as shown in Table 1 below:
TABLE 1
Character state      | Face   | Head    | Upper body | Lower body
Standing             | static | static  | static     | dynamic
Walking              | static | static  | dynamic    | dynamic
Sitting              | static | static  | static     | dynamic
Picking up an object | static | dynamic | dynamic    | dynamic
Hit                  | static | dynamic | dynamic    | dynamic
Dancing              | static | dynamic | dynamic    | dynamic
The character part type information corresponding to the virtual character includes face information, head information, upper body information and lower body information. When the virtual character is in the standing state, the face information, head information and upper body information are determined as static part information, that is, the motion capture data of the target anchor can be collected and mapped to the virtual character; the lower body information is determined as dynamic part information, that is, the preset virtual picture corresponding to the lower body information can be obtained;
determining face information and head information as static part information and upper body information and lower body information as dynamic part information when the virtual character is in a walking state;
Determining face information, head information and upper body information as static part information and lower body information as dynamic part information when the virtual character is in a sitting state;
when the virtual character is in the object-picking state, the face information is determined as static part information, and the head information, upper body information and lower body information are determined as dynamic part information; after the object-picking state ends, capture can continue with the object held, that is, the character enters an object-holding standing state;
when the virtual character is in the hit state, the face information is determined as static part information, and the lower body information, head information and upper body information are determined as dynamic part information; for example, the facial expression of the anchor obtained by face capture is displayed while the character is knocked into the air;
when the virtual character is in the dancing state, the face information is determined as static part information, and the lower body information, head information and upper body information are determined as dynamic part information; for example, the facial expression of the anchor obtained by face capture is displayed during the dance;
in this embodiment, since the target anchor in the live broadcasting room normally performs live broadcast in a sitting position, the lower body motion is displayed as a preset animation; if the lower body needs to be captured in practical application, Table 1 can be adjusted as required.
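Table 1 can be encoded as a simple lookup that splits the requested character parts into static and dynamic sets; a sketch assuming English part and state names mirroring the table:

```python
# Encoding of Table 1: for each character state, which parts may be
# driven by capture data (static) and which fall back to a preset
# animation (dynamic). Names are illustrative, not a real API.
STATE_TABLE = {
    "standing": {"static": {"face", "head", "upper_body"},
                 "dynamic": {"lower_body"}},
    "walking":  {"static": {"face", "head"},
                 "dynamic": {"upper_body", "lower_body"}},
    "sitting":  {"static": {"face", "head", "upper_body"},
                 "dynamic": {"lower_body"}},
    "picking":  {"static": {"face"},
                 "dynamic": {"head", "upper_body", "lower_body"}},
    "hit":      {"static": {"face"},
                 "dynamic": {"head", "upper_body", "lower_body"}},
    "dancing":  {"static": {"face"},
                 "dynamic": {"head", "upper_body", "lower_body"}},
}

def classify_parts(state: str, requested: set):
    """Split the requested character parts into static and dynamic sets
    according to the current character state."""
    row = STATE_TABLE[state]
    return requested & row["static"], requested & row["dynamic"]

static, dynamic = classify_parts("dancing", {"face", "upper_body"})
print(static, dynamic)  # {'face'} {'upper_body'}
```

In the dancing example, only the face remains capture-driven, while the upper body falls back to the preset animation, matching the table row.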
It should be noted that after the virtual character driving data corresponding to the static part information is obtained, the virtual character driving data is mapped to the target virtual character according to the static part information to obtain the target virtual picture; and after the preset virtual picture corresponding to the dynamic part information is obtained, the preset virtual picture can be directly displayed in the target live broadcasting room.
Further, in addition to determining based on the current character state whether a character part uses the action control mode, part mode switching controls, such as a hand capture control and a head capture control, can be configured on the anchor client interface; the target anchor can switch the action control mode of the corresponding part through these controls, for example, turning on hand motion capture while turning off head motion capture, in which case the preset virtual animation corresponding to the head is displayed instead.
The above scheme describes switching from the device control mode to the action control mode to control the virtual character. In practical application, the virtual character can also be controlled by switching from the action control mode to the device control mode, which enriches the control methods of the virtual character and makes it convenient for the target anchor to control the virtual character.
Specifically, the method further comprises the following steps:
displaying a first virtual picture generated by a target virtual character based on an action control mode in the target live broadcasting room;
receiving a control mode switching request, wherein the control mode switching request carries character part type information and character control information;
mapping the role control information to the target virtual role based on the role part type information to obtain a second virtual picture;
and displaying the second virtual picture in the target live broadcasting room.
The first virtual picture is a picture generated by controlling the virtual character through motion capture data; the character control information refers to control information sent by a control device, for example, an instruction to move the target virtual character two steps forward; the second virtual picture is a picture generated by controlling the virtual character based on the character part type information and the character control information.
In practical applications, the control mode switching request may be triggered by the target anchor or generated after character control information is received. That is, in the semi-automatic mode, the control mode can be switched from the action control mode to the device control mode upon receiving character control information, or from the device control mode to the action control mode upon receiving motion capture data; different parts of the target virtual character can also be controlled by the action control mode and the device control mode simultaneously, which enriches the virtual character control modes and improves the user experience.
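The data-driven switching described above can be sketched as a small dispatcher that inspects the incoming request; the request field names and mode labels are hypothetical assumptions:

```python
def resolve_control_path(request: dict) -> str:
    """In the semi-automatic mode, the type of incoming data decides
    which control path drives the character next."""
    if "character_control_info" in request:   # control device sent data
        return "device_control"
    if "motion_capture_data" in request:      # capture data arrived
        return "action_control"
    return "semi_automatic"                   # no new data: keep mode

print(resolve_control_path({"character_control_info": {"move": 2}}))  # device_control
```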
After the anchor client generates the target virtual picture or the second virtual picture, it can push the picture to the server as a live stream; a viewer client that enters the target live broadcasting room then pulls the live stream from the server, so that the content of the target live broadcasting room can be watched.
When the control mode of the target virtual character in the target live broadcasting room is the action control mode, control can be further performed by switching the action control mode to the semi-automatic control mode. Specifically, when the first control mode is the action control mode and the second control mode is the semi-automatic control mode, displaying in the target live broadcasting room the initial virtual picture generated by the target virtual character based on the first control mode includes the following steps:
displaying a first virtual picture generated by a target virtual character based on an action control mode in the target live broadcasting room;
receiving a control mode switching request, wherein the control mode switching request carries target role part type information and target role control information;
determining other role part type information corresponding to the target virtual role based on the target role part type information;
Collecting target virtual character driving data corresponding to a target anchor according to the other character part type information;
mapping the role control information to a target role part of the target virtual role based on the target role part type information, and mapping the target virtual role driving data to other role parts of the target virtual role based on the other role part type information, so as to obtain a second virtual picture;
and displaying the second virtual picture in the target live broadcasting room.
Specifically, when the control mode is the action control mode, the received control mode switching request is parsed to determine the target character part type information. The target character part type information refers to the type information of the target character part of the target virtual character; for example, if the head of the target virtual character is determined to be the target part, the type information corresponding to the head is the target character part type information. The character part corresponding to the target character part type information is determined as the target character part to be controlled by the control device. Other character part type information of the target virtual character is then determined according to the target character part type information, where the other character part type information refers to the type information corresponding to the character parts of the target virtual character other than the target character part; for example, if the target character part of the target virtual character is the head, the other character parts are the upper body and the lower body, that is, the type information corresponding to the upper body and the lower body is the other character part type information.
After the target character part and the other character parts are determined, the character control information is mapped to the target character part according to the target character part type information; the target virtual character driving data are collected according to the other character part type information and mapped to the other character parts, so as to generate the second virtual picture. In this way, in the semi-automatic control mode, the target virtual character is controlled by the action control mode and the device control mode simultaneously.
In a specific embodiment of the present application, a live broadcast picture in which the virtual teacher is controlled based on the action control mode is displayed in a lecture live broadcasting room; a control mode switching request for the lecture live broadcasting room is received, where the request includes upper body character part type information and the corresponding character control information; the other part type information of the virtual teacher, i.e., the head type information, is determined according to the upper body character part type information; the target virtual character driving data of the target anchor, i.e., the face capture data, are collected according to the head type information; the character control data are mapped to the upper body part of the virtual teacher, the face capture data are mapped to the face part of the virtual teacher, and the lower body part of the virtual teacher is displayed with a preset virtual animation, so that the control mode of the virtual teacher is switched from the pure action control mode to the semi-automatic control mode.
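The semi-automatic composition in this embodiment can be sketched as a per-part routing step; a minimal illustration, with all part names and data shapes assumed for demonstration:

```python
def build_semi_auto_frame(target_parts, control_info, capture_data, all_parts):
    """Target parts are driven by device control info; the remaining
    parts take motion-capture driving data, falling back to a preset
    animation when no capture data are available."""
    frame = {}
    for part in all_parts:
        if part in target_parts:
            frame[part] = ("device", control_info.get(part))
        elif part in capture_data:
            frame[part] = ("capture", capture_data[part])
        else:
            frame[part] = ("preset", None)
    return frame

# The embodiment: upper body driven by the control device, face by face
# capture, head and lower body by preset animation / capture fallback.
frame = build_semi_auto_frame(
    target_parts={"upper_body"},
    control_info={"upper_body": "raise_arm"},
    capture_data={"face": "smile_coefficients"},
    all_parts=["face", "head", "upper_body", "lower_body"],
)
print(frame["upper_body"])  # ('device', 'raise_arm')
```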
According to the live broadcast picture display method, initial virtual pictures generated by the target virtual roles based on the first control mode are displayed in the target live broadcast room; receiving a control mode switching request generated based on a preset triggering condition, wherein the control mode switching request comprises switching information; switching the first control mode into a second control mode based on the switching information, wherein the first control mode and the second control mode comprise a device control mode or an action control mode or a semi-automatic control mode; acquiring role data according to the second control mode, and mapping the role data to the target virtual role to obtain a target virtual picture; and displaying the target virtual picture in the target live broadcasting room.
According to the method and the device for displaying the target virtual pictures in the live broadcasting room, under the condition that the initial virtual pictures generated based on the first control mode are displayed in the target live broadcasting room, the control mode can be switched to the second control mode based on the control mode switching request, and the target virtual pictures in the second control mode are displayed, so that the control modes of virtual roles are enriched.
The following describes, with reference to fig. 3, an example of application of the live broadcast picture display method provided in the present application to a game scene, where the live broadcast picture display method is further described. Fig. 3 shows a process flow chart of a live broadcast picture display method applied to a game scene according to an embodiment of the present application, which specifically includes the following steps:
Step 302: and determining a target virtual role corresponding to the target anchor in the target game live broadcasting room.
Step 304: and receiving a role control request for the target virtual role, and determining device control information and control part type information based on the role control request.
Step 306: and mapping the equipment control information to the target virtual role based on the control part type information to obtain an initial virtual picture.
Step 308: and displaying an initial virtual picture of the target virtual character in the target live broadcasting room.
Step 310: and receiving a first control mode switching request, wherein the first control mode switching request carries character part type information.
Step 312: and responding to the first control mode switching request, starting the target acquisition equipment, and acquiring the anchor image of the target anchor based on the target acquisition equipment.
Step 314: and acquiring the anchor action information in the anchor image, and extracting virtual character driving data from the anchor action information according to the character part type information.
Step 316: and acquiring the current role state of the target virtual role, and determining static part information and dynamic part information in the role part type information based on the current role state.
Step 318: and determining virtual character driving data corresponding to the target anchor according to the static position information, and determining preset virtual character animation corresponding to the target anchor according to the dynamic position information.
Step 320: and determining the static part of the target virtual character according to the static part information, and determining the part attribute information corresponding to the static part.
Step 322: and adjusting the attribute information of the character part based on the virtual character driving data to obtain a virtual picture.
Step 324: and fusing the virtual picture with a preset virtual character animation to obtain a target virtual picture, and displaying the target virtual picture in a target game live broadcasting room.
Step 326: and receiving a second control mode switching request, wherein the second control mode switching request carries the character part type information and the character control information.
Step 328: and mapping the role control information to the target virtual role based on the role part type information to obtain a second virtual picture.
Step 330: a second virtual picture is presented in the target game live room.
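Step 324 above, fusing the capture-driven virtual picture with the preset virtual character animation, can be sketched as a per-part merge; a minimal illustration with hypothetical part names and animation labels:

```python
def fuse_frames(driven_parts: dict, preset_animation: dict) -> dict:
    """Step 324 sketch: combine the capture-driven part poses with the
    preset animation frames of the dynamic parts into one target frame."""
    frame = dict(preset_animation)   # preset animation fills dynamic parts
    frame.update(driven_parts)       # capture-driven parts take precedence
    return frame

target = fuse_frames({"face": "smile", "head": "nod"},
                     {"lower_body": "idle_loop"})
print(sorted(target))  # ['face', 'head', 'lower_body']
```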
According to the live broadcast picture display method, initial virtual pictures generated by the target virtual roles based on the equipment control mode are displayed in the target live broadcast room; receiving a control mode switching request, wherein the control mode switching request carries character part type information; collecting virtual character driving data corresponding to a target anchor according to the character part type information; mapping the virtual character driving data to the target virtual character according to the character part type information to obtain a target virtual picture; and displaying the target virtual picture in the target live broadcasting room.
According to the method and the device, under the condition that the initial virtual picture generated based on the device control mode is displayed in the target live broadcasting room, the target virtual picture can be displayed based on the control mode switching request, so that the control modes of the virtual character are enriched and switching between different control modes of the virtual character is achieved; the virtual character driving data are acquired according to the character part type information, which adds control modes for different parts of the virtual character and further enriches the control modes of the virtual character.
Corresponding to the method embodiment, the present application further provides a live-broadcast picture display device embodiment, and fig. 4 shows a schematic structural diagram of a live-broadcast picture display device according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
a first presentation module 402 configured to present, in a target live broadcasting room, an initial virtual picture generated by a target virtual character based on a first control mode;
a receiving module 404, configured to receive a control mode switching request generated based on a preset trigger condition, where the control mode switching request includes switching information;
an acquisition module 406 configured to switch the first control mode to a second control mode based on the switching information, wherein the first control mode and the second control mode include a device control mode or an action control mode or a semiautomatic control mode;
A mapping module 408 configured to obtain character data according to the second control mode, and map the character data to the target virtual character to obtain a target virtual picture;
a second presentation module 410 configured to present the target virtual picture in the target live room.
Optionally, the preset trigger condition includes: an anchor trigger condition, a control device trigger condition, and a character part trigger condition.
Optionally, the apparatus further comprises an action control sub-module configured to:
displaying an initial virtual picture generated by a target virtual character based on a device control mode in a target live broadcasting room;
receiving a control mode switching request, wherein the control mode switching request carries character part type information;
collecting virtual character driving data corresponding to a target anchor according to the character part type information;
mapping the virtual character driving data to the target virtual character according to the character part type information to obtain a target virtual picture;
and displaying the target virtual picture in the target live broadcasting room.
Optionally, the apparatus further comprises a device control sub-module configured to:
Displaying a first virtual picture generated by a target virtual character based on an action control mode in the target live broadcasting room;
receiving a control mode switching request, wherein the control mode switching request carries character part type information and character control information;
mapping the role control information to the target virtual role based on the role part type information to obtain a second virtual picture;
and displaying the second virtual picture in the target live broadcasting room.
Optionally, the apparatus further comprises a semi-automatic control sub-module configured to:
displaying a first virtual picture generated by a target virtual character based on an action control mode in the target live broadcasting room;
receiving a control mode switching request, wherein the control mode switching request carries target role part type information and target role control information;
determining other role part type information corresponding to the target virtual role based on the target role part type information;
collecting target virtual character driving data corresponding to a target anchor according to the other character part type information;
mapping the role control information to a target role part of the target virtual role based on the target role part type information, and mapping the target virtual role driving data to other role parts of the target virtual role based on the other role part type information, so as to obtain a second virtual picture;
and displaying the second virtual picture in the target live broadcasting room.
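The semi-automatic mode described above drives the requested target parts from the device input while motion capture keeps driving the remaining parts. A rough sketch under hypothetical names (the part set and capture callback are assumptions for illustration):

```python
# Assumed full set of controllable character parts.
ALL_PARTS = {"face", "torso", "left_hand", "right_hand"}

def semi_automatic_frame(character, target_parts, control_info, capture):
    other_parts = ALL_PARTS - set(target_parts)   # determined from target part types
    for part in target_parts:                     # device-controlled target parts
        character[part] = control_info[part]
    for part in other_parts:                      # motion-capture-driven other parts
        character[part] = capture(part)
    return character                              # rendered as the second virtual picture
```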
Optionally, the apparatus further comprises a mapping sub-module configured to:
receiving a role control request aiming at a target virtual role, and determining equipment control information and control part type information based on the role control request;
and mapping the equipment control information to the target virtual role based on the control part type information to obtain an initial virtual picture.
Optionally, the acquisition module 406 is further configured to:
starting a target acquisition device and acquiring an anchor image of a target anchor based on the target acquisition device;
and acquiring virtual character driving data in the anchor image based on the character part type information.
Optionally, the acquisition module 406 is further configured to:
analyzing the anchor image to obtain anchor action information;
and extracting virtual character driving data from the anchor action information according to the character part type information.
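As an illustrative sketch of the two steps above (parse the anchor image into action information, then keep only the parts named in the character part type information); a real system would run a pose-estimation model where the stand-in parser sits, and all names here are hypothetical:

```python
def parse_anchor_image(image):
    # Stand-in for pose/landmark estimation on the captured anchor frame.
    return image["detected_actions"]

def extract_drive_data(image, part_types):
    """Extract virtual character driving data for the requested part types."""
    actions = parse_anchor_image(image)
    return {part: actions[part] for part in part_types if part in actions}
```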
Optionally, the mapping module 408 is further configured to:
determining a target role part of the target virtual role according to the role part type information;
determining character part attribute information corresponding to the target character part;
and adjusting the character part attribute information based on the virtual character driving data to obtain a target virtual picture.
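The mapping step can be read as overwriting the target part's attribute record with the driven values; a minimal hypothetical sketch (attribute names illustrative):

```python
def adjust_part(character, part_type, drive_data):
    """Adjust the attribute info of one character part using driving data."""
    attrs = character[part_type]            # current character part attribute info
    for name, value in drive_data.items():
        attrs[name] = value                 # overwrite with the driven values
    return character                        # rendered as the target virtual picture
```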
Optionally, the apparatus further comprises an acquisition sub-module configured to:
acquiring a current role state of the target virtual role;
and determining static part information and dynamic part information in the character part type information based on the current character state.
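A sketch of splitting the part type information into static and dynamic parts from the current character state; the rule shown (parts already mid-animation stay dynamic) is an assumption for illustration, not the patent's rule:

```python
def split_parts(current_state, part_types):
    """Partition part types into static (live-driven) and dynamic (animated)."""
    animating = set(current_state.get("animating", []))
    dynamic = [p for p in part_types if p in animating]
    static = [p for p in part_types if p not in animating]
    return static, dynamic
```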
Optionally, the acquisition module 406 is further configured to:
determining the static part of the target virtual character according to the static part information;
and acquiring virtual character driving data corresponding to the target anchor based on the static part.
Optionally, the acquisition module 406 is further configured to:
determining the dynamic part of the target virtual character according to the dynamic part information;
and acquiring a preset virtual character animation corresponding to the target anchor based on the dynamic part.
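Combining the two branches above: static parts take freshly captured driving data, dynamic parts take a preset virtual character animation. The preset table and capture callback below are hypothetical stand-ins:

```python
# Assumed table of preset animations keyed by dynamic part.
PRESET_ANIMATIONS = {"tail": "idle_wag", "ears": "twitch"}

def drive_parts(static_parts, dynamic_parts, capture):
    data = {p: capture(p) for p in static_parts}               # live capture
    data.update({p: PRESET_ANIMATIONS[p] for p in dynamic_parts})  # preset clips
    return data
```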
Optionally, the apparatus further comprises a detection module configured to:
detecting a shooting device corresponding to the target live broadcasting room and a power-on state of the shooting device;
and in the case where the target shooting device is determined to be in the on state, determining that the first control mode is the action control mode.
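The camera-state check that selects the initial control mode can be sketched as follows (a purely illustrative rule; the fallback to device control is an assumption):

```python
def initial_control_mode(cameras):
    """Pick the first control mode from the live room's camera on/off states."""
    if any(cameras.values()):
        return "action"   # a camera is on: motion capture is possible
    return "device"       # otherwise fall back to device control
```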
According to the live broadcast picture display method described above, an initial virtual picture generated by the target virtual character based on the first control mode is displayed in the target live broadcasting room; a control mode switching request generated based on a preset trigger condition is received, wherein the control mode switching request includes switching information; the first control mode is switched to a second control mode based on the switching information, wherein the first control mode and the second control mode include a device control mode, an action control mode, or a semi-automatic control mode; character data is acquired according to the second control mode and mapped to the target virtual character to obtain a target virtual picture; and the target virtual picture is displayed in the target live broadcasting room.
With the method and apparatus provided by the embodiments of the present application, in a case where the initial virtual picture generated based on the first control mode is displayed in the target live broadcasting room, the control mode can be switched to the second control mode based on a control mode switching request, and the target virtual picture under the second control mode is displayed, thereby enriching the control modes available for virtual characters.
The above is a schematic solution of a live broadcast picture display device of this embodiment. It should be noted that, the technical solution of the live broadcast picture display device and the technical solution of the live broadcast picture display method belong to the same concept, and details of the technical solution of the live broadcast picture display device which are not described in detail can be referred to the description of the technical solution of the live broadcast picture display method.
Fig. 5 illustrates a block diagram of a computing device 500 provided according to an embodiment of the present application. The components of the computing device 500 include, but are not limited to, a memory 510 and a processor 520. The processor 520 is coupled to the memory 510 via a bus 530, and a database 550 is used to store data.
Computing device 500 also includes an access device 540 that enables the computing device 500 to communicate via one or more networks 560. Examples of such networks include a public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the Internet. The access device 540 may include one or more of any type of network interface, wired or wireless, such as a network interface card (NIC), an IEEE 802.11 wireless local area network (WLAN) wireless interface, a worldwide interoperability for microwave access (WiMAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, a near-field communication (NFC) interface, and so forth.
In one embodiment of the present application, the above-described components of computing device 500, as well as other components not shown in FIG. 5, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 5 is for exemplary purposes only and is not intended to limit the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 500 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smart phone), a wearable computing device (e.g., smart watch, smart glasses, etc.), or another type of mobile device, or a stationary computing device such as a desktop computer or personal computer (PC). Computing device 500 may also be a mobile or stationary server.
The processor 520 implements the steps of the live broadcast picture display method when executing the computer instructions.
The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that, the technical solution of the computing device and the technical solution of the live broadcast picture display method belong to the same concept, and details of the technical solution of the computing device, which are not described in detail, can be referred to the description of the technical solution of the live broadcast picture display method.
An embodiment of the present application further provides a computer readable storage medium storing computer instructions that, when executed by a processor, implement the steps of the live broadcast picture display method described above.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the live broadcast picture display method belong to the same concept, and details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the live broadcast picture display method.
The foregoing describes specific embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium may be adjusted as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for the sake of simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all necessary for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The above-disclosed preferred embodiments of the present application are provided only as an aid to the elucidation of the present application. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the teaching of this application. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. This application is to be limited only by the claims and the full scope and equivalents thereof.

Claims (16)

1. A live broadcast picture display method, characterized by comprising:
displaying an initial virtual picture generated by a target virtual character based on a first control mode in a target live broadcasting room;
receiving a control mode switching request generated based on a preset triggering condition, wherein the control mode switching request comprises switching information;
switching the first control mode into a second control mode based on the switching information, wherein the first control mode and the second control mode comprise a device control mode or an action control mode or a semi-automatic control mode;
acquiring role data according to the second control mode, and mapping the role data to the target virtual role to obtain a target virtual picture;
and displaying the target virtual picture in the target live broadcasting room.
2. The method of claim 1, wherein the preset trigger condition comprises: an anchor trigger condition, a control device trigger condition, and a character part trigger condition.
3. The method of claim 1, wherein, in a case where the first control mode is a device control mode and the second control mode is an action control mode, displaying an initial virtual picture generated by the target virtual character based on the first control mode in the target living room, comprises:
displaying an initial virtual picture generated by a target virtual character based on a device control mode in a target live broadcasting room;
receiving a control mode switching request, wherein the control mode switching request carries character part type information;
collecting virtual character driving data corresponding to a target anchor according to the character part type information;
mapping the virtual character driving data to the target virtual character according to the character part type information to obtain a target virtual picture;
and displaying the target virtual picture in the target live broadcasting room.
4. The method of claim 1, wherein, in a case where the first control mode is an action control mode and the second control mode is a device control mode, displaying an initial virtual picture generated by the target virtual character based on the first control mode in the target living room, comprises:
displaying a first virtual picture generated by a target virtual character based on an action control mode in the target live broadcasting room;
receiving a control mode switching request, wherein the control mode switching request carries character part type information and character control information;
mapping the role control information to the target virtual role based on the role part type information to obtain a second virtual picture;
and displaying the second virtual picture in the target live broadcasting room.
5. The method of claim 1, wherein, in a case where the first control mode is an action control mode and the second control mode is a semi-automatic control mode, displaying an initial virtual picture generated by the target virtual character based on the first control mode at the target living room, comprises:
displaying a first virtual picture generated by a target virtual character based on an action control mode in the target live broadcasting room;
receiving a control mode switching request, wherein the control mode switching request carries target role part type information and target role control information;
determining other role part type information corresponding to the target virtual role based on the target role part type information;
collecting target virtual character driving data corresponding to a target anchor according to the other character part type information;
mapping the role control information to a target role part of the target virtual role based on the target role part type information, and mapping the target virtual role driving data to other role parts of the target virtual role based on the other role part type information, so as to obtain a second virtual picture;
and displaying the second virtual picture in the target live broadcasting room.
6. The method of claim 3, further comprising, before displaying, in the target live broadcasting room, the initial virtual picture generated by the target virtual character based on the device control mode:
receiving a role control request aiming at a target virtual role, and determining equipment control information and control part type information based on the role control request;
and mapping the equipment control information to the target virtual role based on the control part type information to obtain an initial virtual picture.
7. The method of claim 3, wherein collecting virtual character driving data corresponding to a target anchor according to the character part type information, comprising:
starting a target acquisition device and acquiring an anchor image of a target anchor based on the target acquisition device;
and acquiring virtual character driving data in the anchor image based on the character part type information.
8. The method of claim 7, wherein acquiring virtual character drive data in the anchor image based on the character part type information comprises:
analyzing the anchor image to obtain anchor action information;
and extracting virtual character driving data from the anchor action information according to the character part type information.
9. The method of claim 3, wherein mapping the avatar driving data to the target avatar according to the character part type information, obtaining a target virtual picture, comprises:
determining a target role part of the target virtual role according to the role part type information;
determining character part attribute information corresponding to the target character part;
and adjusting the character part attribute information based on the virtual character driving data to obtain a target virtual picture.
10. The method of claim 3, further comprising, prior to collecting virtual character driving data corresponding to a target anchor according to the character part type information:
acquiring a current role state of the target virtual role;
and determining static part information and dynamic part information in the character part type information based on the current character state.
11. The method of claim 10, wherein collecting virtual character driving data corresponding to a target anchor according to the character part type information, comprises:
determining the static part of the target virtual character according to the static part information;
and acquiring virtual character driving data corresponding to the target anchor based on the static part.
12. The method of claim 10, wherein collecting virtual character driving data corresponding to a target anchor according to the character part type information, comprises:
determining the dynamic part of the target virtual character according to the dynamic part information;
and acquiring a preset virtual character animation corresponding to the target anchor based on the dynamic part.
13. The method of claim 1, further comprising, before displaying, in the target live broadcasting room, the initial virtual picture generated by the target virtual character based on the first control mode:
detecting a shooting device corresponding to the target live broadcasting room and a power-on state of the shooting device;
and in the case where the target shooting device is determined to be in the on state, determining that the first control mode is the action control mode.
14. A live broadcast picture display device, comprising:
the first display module is configured to display an initial virtual picture generated by the target virtual character based on the first control mode in the target live broadcasting room;
The receiving module is configured to receive a control mode switching request generated based on a preset triggering condition, wherein the control mode switching request comprises switching information;
a switching module configured to switch the first control mode to a second control mode based on the switching information, wherein the first control mode and the second control mode include a device control mode or an action control mode or a semiautomatic control mode;
the mapping module is configured to acquire character data according to the second control mode, map the character data to the target virtual character and acquire a target virtual picture;
and the second display module is configured to display the target virtual picture in the target live broadcasting room.
15. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor, when executing the computer instructions, performs the steps of the method of any one of claims 1-13.
16. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1-13.
CN202211538639.6A 2022-12-02 2022-12-02 Live broadcast picture display method and device Pending CN116016963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211538639.6A CN116016963A (en) 2022-12-02 2022-12-02 Live broadcast picture display method and device


Publications (1)

Publication Number Publication Date
CN116016963A true CN116016963A (en) 2023-04-25

Family

ID=86028777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211538639.6A Pending CN116016963A (en) 2022-12-02 2022-12-02 Live broadcast picture display method and device

Country Status (1)

Country Link
CN (1) CN116016963A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012212237A (en) * 2011-03-30 2012-11-01 Namco Bandai Games Inc Image generation system, server system, program, and information storage medium
CN105630374A (en) * 2015-12-17 2016-06-01 网易(杭州)网络有限公司 Virtual character control mode switching method and device
CN107533374A (en) * 2015-08-26 2018-01-02 谷歌有限责任公司 Switching at runtime and the merging on head, gesture and touch input in virtual reality
CN108211358A (en) * 2017-11-30 2018-06-29 腾讯科技(成都)有限公司 The display methods and device of information, storage medium, electronic device
CN110782533A (en) * 2019-10-29 2020-02-11 北京电影学院 System for controlling interaction of virtual roles in virtual rehearsal
CN113220116A (en) * 2015-10-20 2021-08-06 奇跃公司 System and method for changing user input mode of wearable device and wearable system
CN113327312A (en) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 Virtual character driving method, device, equipment and storage medium
CN113938336A (en) * 2021-11-15 2022-01-14 网易(杭州)网络有限公司 Method, device and electronic device for conference control


Similar Documents

Publication Publication Date Title
CN111556278B (en) Video processing method, video display device and storage medium
CN113596536B (en) Display device and information display method
JP4877762B2 (en) Facial expression guidance device, facial expression guidance method, and facial expression guidance system
CN110888532A (en) Man-machine interaction method and device, mobile terminal and computer readable storage medium
CN109432753A (en) Motion correction method, device, storage medium and electronic device
CN112905074B (en) Interactive interface display method, interactive interface generation method and device and electronic equipment
WO2022068479A1 (en) Image processing method and apparatus, and electronic device and computer-readable storage medium
JP2020039029A (en) Video distribution system, video distribution method, and video distribution program
CN110868554B (en) Method, device and equipment for changing faces in real time in live broadcast and storage medium
CN113507621A (en) Live broadcast method, device, system, computer equipment and storage medium
CN111643900A (en) Display picture control method and device, electronic equipment and storage medium
CN112073770B (en) Display device and video communication data processing method
CN113689530B (en) Method and device for driving digital person and electronic equipment
CN110162667A (en) Video generation method, device and storage medium
JP2023103335A (en) Computer program, server device, terminal device and display method
CN112261481A (en) Interactive video creating method, device and equipment and readable storage medium
KR20050082559A (en) Dance learning system, internet community service system and internet community service method using the same, dance learning method, and computer executable recording media on which programs implement said methods are recorded
CN112738420A (en) Special effect implementation method and device, electronic equipment and storage medium
CN108986803A (en) Scenery control method and device, electronic equipment, readable storage medium storing program for executing
JP2020202575A (en) Video distribution system, video distribution method, and video distribution program
CN115086594A (en) Virtual conference processing method, device, equipment and storage medium
KR101850860B1 (en) Motion capture and Image Process System and implementation Method of the Same
CN116016963A (en) Live broadcast picture display method and device
EP4496329A1 (en) Photographic processing method and apparatus based on virtual reality, and electronic device
CN114078280B (en) Motion capture method, device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination