
WO2024226818A1 - Optical guidance and tracking for medical imaging

Info

Publication number: WO2024226818A1
Authority: WIPO (PCT)
Prior art keywords: patient, camera, OGTS, image, treatment
Application number: PCT/US2024/026297
Other languages: French (fr)
Inventors: Andries N. SCHREUDER, Augustin MANOLACHE, Mark ARTZ
Original Assignee: Leo Cancer Care, Inc.
Application filed by: Leo Cancer Care, Inc.
Publication: WO2024226818A1 (en)


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N: ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00: Radiation therapy
    • A61N5/10: X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N5/1048: Monitoring, verifying, controlling systems and methods
    • A61N5/1049: Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046: Tracking techniques
    • A61B2034/2055: Optical tracking systems
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02: Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03: Computed tomography [CT]
    • A61B6/032: Transmission computed tomography [CT]
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N: ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00: Radiation therapy
    • A61N5/10: X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N5/1048: Monitoring, verifying, controlling systems and methods
    • A61N5/1049: Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
    • A61N2005/1059: Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam using cameras imaging the patient

Definitions

  • medical imaging and radiation therapy procedures typically involve immobilizing a patient in an appropriate position so that the relevant area of the patient body to be imaged and/or treated is stable and static.
  • Comfortable immobilization of a patient can be provided by a patient positioning system that supports the patient body in a position for imaging and/or treatment.
  • Patient positioning systems are configurable apparatuses having a number of movable components that support and immobilize the patient torso, arms, legs, hands, feet, head, and neck in position for imaging and treatment. See, e.g., U.S. Pat.
  • monitoring the location of patient positioning components during movement and setup is important to avoid collisions with other objects and individuals who may be near the patient positioning system during configuration.
  • monitoring the location of the patient during imaging and/or treatment and detecting patient movements during imaging and/or treatment improves the accuracy of imaging and/or treatment, decreases exposure of healthy tissue to radiation, and can improve safety of patients (e.g., by detecting a patient fall or errant placement of healthy tissue in a position where it may be exposed to radiation).
  • one neglected aspect of patient positioning for radiation therapy is surface guidance (SG). SG technologies image and monitor the external surface of a patient to provide a preliminary alignment of the patient body before assessing the positions of internal organs.
  • the technology provides an orthogonal implementation of three cameras (e.g., an overhead camera and two peripheral cameras that are mutually orthogonal).
  • the technology provides for correcting the position of an object (e.g., a patient) using the cameras, e.g., the overhead camera shows rotation errors about the vertical (Z) axis and shows X and Y translations; and the peripheral cameras show errors in the vertical direction.
  • the technology provides a first camera that directly faces the object (e.g., patient) such that transverse movements of the object are detected in the first camera view and longitudinal movements are detected by the other two cameras that are orthogonal to the first camera.
  • the technology provides an ability to obtain a four-degree correction from the three orthogonal cameras, e.g., adjustments in the X and Y directions and rotation around the Z axis are determined by aligning live views from the top camera with a reference image; and adjustments of vertical position are determined by aligning peripheral camera live views with reference images.
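A minimal sketch of this four-degree correction is shown below; it assumes the pixel offsets and rotation angle have already been measured by aligning the live views with the reference images, and that a millimeters-per-pixel scale is known for each camera. The class and function names are illustrative and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class FourDegreeCorrection:
    dx_mm: float    # translation along X, from the overhead view
    dy_mm: float    # translation along Y, from the overhead view
    dz_mm: float    # vertical translation along Z, from a peripheral view
    yaw_deg: float  # rotation around the vertical (Z) axis, from the overhead view

def four_degree_correction(overhead_dx_px: float, overhead_dy_px: float,
                           overhead_yaw_deg: float, peripheral_dz_px: float,
                           mm_per_px_overhead: float,
                           mm_per_px_peripheral: float) -> FourDegreeCorrection:
    """Convert pixel-space alignment offsets into a real-space correction."""
    return FourDegreeCorrection(
        dx_mm=overhead_dx_px * mm_per_px_overhead,
        dy_mm=overhead_dy_px * mm_per_px_overhead,
        dz_mm=peripheral_dz_px * mm_per_px_peripheral,
        yaw_deg=overhead_yaw_deg,
    )

# Example: overhead view off by (12, -5) px and 1.5 degrees; a peripheral view
# shows an 8 px vertical offset; assume 0.5 mm per pixel for both cameras.
print(four_degree_correction(12, -5, 1.5, 8, 0.5, 0.5))
```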
  • reference images are grouped into scenes that can be recalled as needed.
  • the technology provides an interface that allows a user to select a region of interest from individual camera views to provide precise information for the object in the selected region.
  • cameras only send data within a selected region of interest to the host computer, thereby reducing the amount of data that is transferred to the computer and providing fast frame rates.
  • the technology provides an interface through which a user may select a number of cameras that provide the best views of the object (e.g., patient) based on the orientation of the patient and locations of the cameras.
  • the technology provides an interface through which a user may draw reference lines on a camera image that do not move with the image (e.g., a vertical and/or a horizontal line).
  • the lines may be adjusted to intersect with a reference point in the treatment room, e.g., the isocenter.
  • the lines may serve the same purpose as laser lines in the room. Moving the object until a point or marker on the object (which is visible in the live image) intersects with the fixed reference lines on two cameras allows for aligning the object with the reference point in the room.
  • the technology provides for offsetting a reference image according to a position correction (e.g., correction vector) to be applied to the patient and/or patient support. The mismatch between the offset reference image and the live view will be minimized and/or disappear if the correction vector is applied correctly, thus providing a technology to verify the correct application of the correction vector.
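A minimal numpy sketch of this verification idea follows; it assumes the correction vector has already been converted to a whole-pixel shift for the camera in question, and the tolerance and mean-absolute-difference metric are illustrative choices rather than values from the disclosure.

```python
import numpy as np

def correction_applied_correctly(reference: np.ndarray, live: np.ndarray,
                                 shift_px: tuple, tolerance: float = 10.0) -> bool:
    # Offset the reference image according to the correction vector (rows, cols).
    offset_reference = np.roll(reference, shift=shift_px, axis=(0, 1))
    # Residual mismatch between the offset reference and the live view.
    residual = np.mean(np.abs(offset_reference.astype(float) - live.astype(float)))
    return residual < tolerance

# Synthetic check: a live view that is the reference shifted by the same amount
# as the correction should verify as correctly applied.
ref = np.zeros((100, 100)); ref[40:60, 45:55] = 255.0
live = np.roll(ref, shift=(3, -2), axis=(0, 1))
print(correction_applied_correctly(ref, live, shift_px=(3, -2)))  # True
```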
  • the technology provides an optical guidance and tracking system (OGTS).
  • the OGTS comprises an overhead camera; and a first peripheral camera, wherein a field of view of the overhead camera is orthogonal to a field of view of the first peripheral camera.
  • the OGTS further comprises a second peripheral camera, wherein a field of view of the second peripheral camera is orthogonal to the field of view of the overhead camera; and the field of view of the second peripheral camera is orthogonal to the field of view of the first peripheral camera.
  • the OGTS further comprises a third peripheral camera, wherein the fields of view of any two of the peripheral cameras and the overhead camera are all mutually orthogonal.
  • the OGTS further comprises a fourth peripheral camera, wherein the fields of view of any two of the peripheral cameras and the overhead camera are all mutually orthogonal.
  • the OGTS further comprises a patient support.
  • the patient support rotates around a vertical (Z) axis.
  • the field of view of the overhead camera is aligned with the vertical (Z) axis.
  • the OGTS further comprises a radiation therapy apparatus.
  • the radiation therapy apparatus comprises a static source.
  • the OGTS further comprises a computerized tomography (CT) scanner.
  • the overhead camera provides a view through a bore of a scanner ring of the CT scanner.
  • the overhead camera comprises a color sensor array and the first peripheral camera comprises a color sensor array.
  • the OGTS further comprises a processor and a non-transitory computer-readable medium.
  • the non-transitory computer-readable medium comprises a program and the processor executes the program to acquire color images from the overhead camera and to acquire images from the peripheral camera.
  • the OGTS further comprises a display.
  • the non-transitory computer-readable medium comprises a program and the processor executes the program to superimpose a live video over a reference image on the display.
  • the non-transitory computer-readable medium comprises a program and the processor executes the program to provide a graphical user interface on the display.
  • a user interacts with the graphical user interface to identify a region of interest of a camera view.
  • a user interacts with the graphical user interface to align the live video and the reference image on the display.
  • the processor calculates an adjustment in real space to position a patient properly for a treatment.
  • the OGTS further comprises a database comprising a saved scene.
  • the saved scene comprises an image, information identifying a camera that provided the image, and a region of interest for the image.
  • the first peripheral camera is located on a major Y axis of the OGTS. In some embodiments, the first peripheral camera is located on a major Y axis of the OGTS and the second peripheral camera is located on a major X axis of the OGTS. In some embodiments, the overhead camera is located on a major Z axis of the OGTS.
  • methods comprise obtaining a first reference image of a patient support and/or a patient; superimposing a first live image of a patient support and/or a patient over the reference image; aligning the first live image and the first reference image to determine a displacement; and moving the patient support and/or the patient according to the displacement.
  • the first reference image was provided by a first camera and the first live image is provided by the first camera.
  • methods further comprise obtaining a second reference image of the patient support and/or the patient; and superimposing a second live image of the patient support and/or the patient over the second reference image.
  • the second reference image was provided by a second camera and the second live image is provided by the second camera; and wherein a field of view of the second camera is orthogonal to a field of view of the first camera.
  • the first camera is an overhead camera. In some embodiments, the first camera is a peripheral camera.
  • aligning the first live image and the first reference image comprises a user interacting with a graphical user interface to align the first live image and the first reference image. In some embodiments, aligning the first live image and the first reference image comprises using an image alignment software to align the first live image and the first reference image.
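The disclosure does not name a particular image alignment algorithm. As one possible implementation of the "image alignment software" mentioned above, the sketch below uses phase correlation to estimate the translational offset between a reference image and a live image; rotation and sub-pixel refinement are omitted.

```python
import numpy as np

def phase_correlation_shift(reference: np.ndarray, live: np.ndarray) -> tuple:
    """Estimate the (row, col) shift that maps the reference image onto the live image."""
    cross_power = np.conj(np.fft.fft2(reference)) * np.fft.fft2(live)
    cross_power /= np.abs(cross_power) + 1e-12          # keep phase information only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Indices above the midpoint wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

# Synthetic example: the live image is the reference shifted down 6 rows and left 4 columns.
ref = np.zeros((128, 128)); ref[50:70, 60:80] = 1.0
live = np.roll(ref, shift=(6, -4), axis=(0, 1))
print(phase_correlation_shift(ref, live))  # (6, -4)
```

The recovered pixel shift can then be converted to a real-space displacement using the relationship between camera pixel size and distance mentioned below.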
  • a saved scene comprises the first reference image. In some embodiments, the saved scene comprises the first reference image, information identifying a camera that provided the first reference image, and a region of interest for the first reference image. In some embodiments, the displacement comprises a translation in the X, Y, and/or Z directions and/or a rotation around the X, Y, and/or Z axes.
  • methods further comprise determining a relationship between the pixel size of the first camera to a distance in real space. In some embodiments, methods further comprise contacting the patient with radiation. In some embodiments, methods further comprise imaging the patient using computerized tomography. Further embodiments of methods comprise obtaining a first reference image of a patient support and/or a patient; superimposing a first live image of a patient support and/or a patient over the reference image; displacing the reference image according to a correction vector; applying the correction vector to the patient support and/or the patient; and verifying correct application of the correction vector using alignment of the first live image and the first reference image.
  • application of the correction vector is correct when the first live image and the first reference image are substantially, maximally, or essentially aligned.
  • a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all steps, operations, or processes described.
  • systems comprise a computer and/or data storage provided virtually (e.g., as a cloud computing resource).
  • the technology comprises use of cloud computing to provide a virtual computer system that comprises the components and/or performs the functions of a computer as described herein.
  • cloud computing provides infrastructure, applications, and software as described herein through a network and/or over the internet.
  • computing resources e.g., data analysis, calculation, data storage, application programs, file storage, etc.
  • a network e.g., the internet; and/or a cellular network.
  • Embodiments of the technology may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer.
  • Such a computer program may be stored in a non-transitory, tangible computer readable storage medium or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus.
  • FIG. 1A is a drawing in an oblique view of a patient support showing the axes of a coordinate system.
  • FIG. 1B is a drawing in a side view of a patient support showing the axes of a coordinate system.
  • FIG. 1C is a schematic drawing showing a data structure for a scene.
  • FIG. 1D is a schematic drawing showing a data structure for a scene.
  • FIG. 2A is a drawing in a top view of a medical imaging and/or radiotherapy system comprising an optical guidance tracking system (comprising a number of cameras) and a patient positioning system (e.g., comprising a patient support) comprising a patient.
  • FIG. 2B is a drawing in a side view of a medical imaging and/or radiotherapy system comprising an optical guidance tracking system (comprising a number of cameras) and a patient positioning system (e.g., comprising a patient support) comprising a patient.
  • FIG. 2C is a drawing in a top view of a medical imaging and/or radiotherapy system comprising an optical guidance tracking system (comprising a number of cameras) and a patient positioning system (e.g., comprising a patient support) comprising a patient.
  • FIG. 2D is a drawing in a side view of a medical imaging and/or radiotherapy system comprising an optical guidance tracking system (comprising a number of cameras) and a patient positioning system (e.g., comprising a patient support) comprising a patient.
  • FIG. 2E is a schematic drawing showing three mutually orthogonal cameras.
  • FIG. 2F is a schematic drawing showing five cameras.
  • FIG. 3 is a schematic drawing showing three mutually orthogonal cameras and the corrections (e.g., translations in the X, Y, and/or Z directions; and/or rotations around the X, Y, and/or Z axes) of a patient position that can be derived from each camera view.
  • FIG. 4 is a drawing in an oblique view of a patient support indicating configurable components of the patient support.
  • FIG. 5A is a rendering of a treatment room comprising an upright patient positioning and imaging system.
  • FIG. 5B is a drawing showing a clinical installation of the technology described herein.
  • the drawing shows an upright imaging and/or treatment system, four orthogonal cameras in a horizontal plane, and an overhead camera above the isocenter.
  • FIG. 5C shows a design of an embodiment of an OGTS system provided in a treatment room 710, an OGTS control room 720, and an OGTS technical room 730.
  • FIG. 5D shows a view of section A from FIG.5C showing the treatment room 710.
  • FIG. 5E shows a view of section B from FIG.5C showing the control room 720 and technical room 730.
  • FIG. 6 shows a graphic user interface (GUI) of the OGTS software as displayed on a display of a client computer.
  • FIG. 7 shows the GUI of the OGTS software showing a person in a patient support and with the cameras zoomed in to a specified region of interest for each camera.
  • FIG. 8 shows the GUI of the OGTS software displaying two scene selection panels. The left panel shows setup scenes that were previously recorded (e.g., comprising reference images) and that may be retrieved for use. The right panel shows treatment scenes (e.g., comprising reference images) that were recorded.
  • FIG. 9 shows the GUI of the OGTS software during a tracking session using a reference image and a live tracking image.
  • FIG. 10A is a flow chart showing steps of an embodiment of a method for pre-treatment patient immobilization and imaging.
  • FIG. 10B is a flow chart showing steps of an embodiment of a method for initial patient immobilization. Methods for initial patient immobilization may be performed, e.g., for a new patient, for a new treatment of a patient, for a treatment of a new region of a patient, or for a treatment of a patient in a new patient posture.
  • FIG. 10C is a flow chart showing steps of an embodiment of a method for subsequent patient immobilization. Methods for subsequent patient immobilization may be performed, e.g., when a patient position, a PPA or patient support configuration, and/or an imaging position have previously been determined and saved in a configuration setup scene, patient position setup scene, and/or imaging position scene, respectively.
  • FIG. 10D is a flow chart showing steps of an embodiment of a method for obtaining a pre-treatment CT scan of a patient.
  • FIG. 11A is a flow chart showing steps of an embodiment of a method for treating a patient with radiation.
  • FIG. 11B is a flow chart showing steps of an embodiment of a method for loading a patient on a PPA or patient support appropriate for treating the patient.
  • FIG. 11C is a flow chart showing steps of an embodiment of a method for imaging a patient for treatment, e.g., by obtaining a treatment CT scan of the patient.
  • FIG. 11D is a flow chart showing steps of an embodiment of a method for treating a patient with radiation.
DETAILED DESCRIPTION

  • Provided herein is technology relating to medical imaging and radiological treatment and particularly, but not exclusively, to methods and systems for monitoring the safe movement of a patient positioning system, for verifying the setup of a patient positioning system, for verifying the setup of a patient on the patient positioning system, and for monitoring patient position during medical imaging and/or radiological treatment.
  • the technology provided herein is an optical guidance and tracking system (OGTS) comprising multiple (e.g., 3, 4, or 5) high-resolution (e.g., approximately 20 megapixels) optical cameras of which three or more of the cameras are positioned to be mutually orthogonal to one another.
  • the OGTS simultaneously obtains multiple (e.g., at least three) high-resolution two-dimensional images of a patient from at least three directions, which provides synchronous live views from at least three orthogonal directions. Accordingly, the use of orthogonal live images provides an improved imaging technology relative to conventional technologies that obtain images and subsequently construct a three-dimensional image.
  • the OGTS finds use in methods comprising recording reference images from a plurality (e.g., 2, 3, 4, or 5) of the OGTS cameras when an object (e.g., a patient) is at a desired position and orientation. Further, methods comprise saving the reference images to provide saved reference images for each camera. Reference images may be grouped into scenes that may be recalled as needed.
  • methods comprise tracking the position of the object (e.g., the patient) by obtaining live images from the plurality of cameras and overlaying the live images on the saved reference images for each camera.
  • methods for repositioning the object comprise retrieving saved reference images (e.g., as part of a saved scene) from storage, obtaining live images from the plurality of cameras, and overlaying the live images on the saved reference images for each camera. Differences between the live and overlaid images may be used to determine appropriate translations and rotations for repositioning the object (e.g., the patient) to reproduce the position in the reference images.
  • recitation of numeric ranges includes the endpoints and each intervening number therebetween with the same degree of precision.
  • for example, for the range 6 to 9, the numbers 7 and 8 are contemplated in addition to 6 and 9, and for the range 6.0–7.0, the numbers 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, and 7.0 are explicitly contemplated.
  • the suffix “-free” refers to an embodiment of the technology that omits the feature of the base root of the word to which “-free” is appended. That is, the term “X-free” as used herein means “without X”, where X is a feature of the technology omitted in the “X-free” technology.
  • a “calcium-free” composition does not comprise calcium
  • a “mixing-free” method does not comprise a mixing step, etc.
  • although the terms “first”, “second”, “third”, etc. may be used herein to describe various steps, elements, compositions, components, regions, layers, and/or sections, these steps, elements, compositions, components, regions, layers, and/or sections should not be limited by these terms, unless otherwise indicated. These terms are used to distinguish one step, element, composition, component, region, layer, and/or section from another step, element, composition, component, region, layer, and/or section. Terms such as “first”, “second”, and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context.
  • a first step, element, composition, component, region, layer, or section discussed herein could be termed a second step, element, composition, component, region, layer, or section without departing from the technology.
  • the word “presence” or “absence” is used in a relative sense to describe the amount or level of a particular entity (e.g., component, action, element). For example, when an entity is said to be “present”, it means the level or amount of this entity is above a pre-determined threshold; conversely, when an entity is said to be “absent”, it means the level or amount of this entity is below a pre-determined threshold.
  • the pre-determined threshold may be the threshold for detectability associated with the particular test used to detect the entity or any other threshold.
  • an “increase” or a “decrease” refers to a detectable (e.g., measured) positive or negative change, respectively, in the value of a variable relative to a previously measured value of the variable, relative to a pre-established value, and/or relative to a value of a standard control.
  • An increase is a positive change preferably at least 10%, more preferably 50%, still more preferably 2-fold, even more preferably at least 5-fold, and most preferably at least 10-fold relative to the previously measured value of the variable, the pre-established value, and/or the value of a standard control.
  • a decrease is a negative change preferably at least 10%, more preferably 50%, still more preferably at least 80%, and most preferably at least 90% of the previously measured value of the variable, the pre-established value, and/or the value of a standard control.
  • Other terms indicating quantitative changes or differences, such as “more” or “less,” are used herein in the same fashion as described above.
  • a “system” refers to a plurality of real and/or abstract components operating together for a common purpose.
  • a “system” is an integrated assemblage of hardware and/or software components.
  • each component of the system interacts with one or more other components and/or is related to one or more other components.
  • a system refers to a combination of components and software for controlling and directing methods.
  • a “system” or “subsystem” may comprise one or more of, or any combination of, the following: mechanical devices, hardware, components of hardware, circuits, circuitry, logic design, logical components, software, software modules, components of software or software modules, software procedures, software instructions, software routines, software objects, software functions, software classes, software programs, files containing software, etc., to perform a function of the system or subsystem.
  • the methods and apparatus of the embodiments may take the form of program code (e.g., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, flash memory, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the embodiments.
  • In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (e.g., volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
  • One or more programs may implement or utilize the processes described in connection with the embodiments, e.g., through the use of an application programming interface (API), reusable controls, or the like.
  • Such programs are preferably implemented in a high-level procedural or object-oriented programming language to communicate with a computer system.
  • the program(s) can be implemented in assembly or machine language, if desired.
  • the language may be a compiled or interpreted language, and combined with hardware implementations.
  • CT refers to numerous forms of CT, including but not limited to x-ray CT, positron emission tomography (PET), single-photon emission computed tomography (SPECT), and photon counting computed tomography.
  • computed tomography comprises use of an x-ray source and a detector that rotates around a patient and subsequent reconstruction of images into different planes.
  • the x-ray source is a static source and the patient is rotated with respect to the static source.
  • structured to [verb] means that the identified element or assembly has a structure that is shaped, sized, disposed, coupled, and/or configured to perform the identified verb.
  • a member that is “structured to move” is movably coupled to another element and includes elements that cause the member to move or the member is otherwise configured to move in response to other elements or assemblies.
  • structured to [verb] recites structure and not function.
  • structured to [verb] means that the identified element or assembly is intended to, and is designed to, perform the identified verb.
  • association means that the elements are part of the same assembly and/or operate together or act upon/with each other in some manner. For example, an automobile has four tires and four hub caps. While all the elements are coupled as part of the automobile, it is understood that each hubcap is “associated” with a specific tire.
  • coupled refers to two or more components that are secured, by any suitable means, together.
  • the statement that two or more parts or components are “coupled” shall mean that the parts are joined or operate together either directly or indirectly, e.g., through one or more intermediate parts or components.
  • directly coupled means that two elements are directly in contact with each other.
  • fixedly coupled or “fixed” means that two components are coupled so as to move as one while maintaining a constant orientation relative to each other. Accordingly, when two elements are coupled, all portions of those elements are coupled.
  • a description, however, of a specific portion of a first element being coupled to a second element, e.g., an axle first end being coupled to a first wheel, means that the specific portion of the first element is disposed closer to the second element than the other portions thereof.
  • an object resting on another object held in place only by gravity is not “coupled” to the lower object unless the upper object is otherwise maintained substantially in place. That is, for example, a book on a table is not coupled thereto, but a book glued to a table is coupled thereto.
  • the term “removably coupled” or “temporarily coupled” means that one component is coupled with another component in an essentially temporary manner. That is, the two components are coupled in such a way that the joining or separation of the components is easy and does not damage the components. Accordingly, “removably coupled” components are readily uncoupled and recoupled without damage to the components.
  • the term “operatively coupled” means that a number of elements or assemblies, each of which is movable between a first position and a second position, or a first configuration and a second configuration, are coupled so that as the first element moves from one position/configuration to the other, the second element moves between positions/configurations as well. It is noted that a first element may be “operatively coupled” to another without the opposite being true.
  • the term “rotatably coupled” refers to two or more components that are coupled in a manner such that at least one of the components is rotatable with respect to the other.
  • the term “translatably coupled” refers to two or more components that are coupled in a manner such that at least one of the components is translatable with respect to the other.
  • the term “temporarily disposed” means that a first element or assembly is resting on a second element or assembly in a manner that allows the first element/assembly to be moved without having to decouple or otherwise manipulate the first element. For example, a book simply resting on a table, e.g., the book is not glued or fastened to the table, is “temporarily disposed” on the table.
  • the term “correspond” indicates that two structural components are sized and shaped to be similar to each other and are coupled with a minimum amount of friction.
  • an opening which “corresponds” to a member is sized slightly larger than the member so that the member may pass through the opening with a minimum amount of friction.
  • This definition is modified if the two components are to fit “snugly” together. In that situation, the difference between the size of the components is even smaller whereby the amount of friction increases. If the element defining the opening and/or the component inserted into the opening are made from a deformable or compressible material, the opening may even be slightly smaller than the component being inserted into the opening.
  • a “path of travel” or “path,” when used in association with an element that moves, includes the space an element moves through when in motion. As such, any element that moves inherently has a “path of travel” or “path.”
  • the statement that two or more parts or components “engage” one another shall mean that the elements exert a force or bias against one another either directly or through one or more intermediate elements or components. Further, as used herein with regard to moving parts, a moving part may “engage” another element during the motion from one position to another and/or may “engage” another element once in the described position.
  • the statements, “when element A moves to element A first position, element A engages element B,” and “when element A is in element A first position, element A engages element B” are equivalent statements and mean that element A either engages element B while moving to element A first position and/or element A engages element B while in element A first position.
  • the term “operatively engage” means “engage and move.” That is, “operatively engage” when used in relation to a first component that is structured to move a movable or rotatable second component means that the first component applies a force sufficient to cause the second component to move. For example, a screwdriver is placed into contact with a screw.
  • When no force is applied to the screwdriver, the screwdriver is merely “coupled” to the screw. If an axial force is applied to the screwdriver, the screwdriver is pressed against the screw and “engages” the screw. However, when a rotational force is applied to the screwdriver, the screwdriver “operatively engages” the screw and causes the screw to rotate. Further, with electronic components, “operatively engage” means that one component controls another component by a control signal or current. As used herein, the term “orthogonal” means perpendicular, essentially perpendicular, or substantially perpendicular. Two orthogonal components or elements (e.g., objects, lines, line segments, vectors, or axes) meet at an angle of 90° at their point of intersection.
  • the term “number” shall mean one or an integer greater than one (e.g., a plurality).
  • in the statements “[x] moves between its first position and second position” and “[y] is structured to move [x] between its first position and second position,” “[x]” is the name of an element or assembly.
  • a “radial side/surface” for a circular or cylindrical body is a side/surface that extends about, or encircles, the center thereof or a height line passing through the center thereof.
  • an “axial side/surface” for a circular or cylindrical body is a side that extends in a plane extending generally perpendicular to a height line passing through the center.
  • a “diagnostic” test includes the detection or identification of a disease state or condition of a subject, determining the likelihood that a subject will contract a given disease or condition, determining the likelihood that a subject with a disease or condition will respond to therapy, determining the prognosis of a subject with a disease or condition (or its likely progression or regression), and determining the effect of a treatment on a subject with a disease or condition.
  • a diagnostic can be used for detecting the presence or likelihood of a subject having a cancer or the likelihood that such a subject will respond favorably to a compound (e.g., a pharmaceutical, e.g., a drug) or other treatment.
  • a condition refers generally to a disease, malady, injury, event, or change in health status.
  • “treating” or “treatment” with respect to a condition refers to preventing the condition, slowing the onset or rate of development of the condition, reducing the risk of developing the condition, preventing or delaying the development of symptoms associated with the condition, reducing or ending symptoms associated with the condition, generating a complete or partial regression of the condition, or some combination thereof.
  • treatment comprises exposing a patient or a portion thereof (e.g., a tissue, organ, body part, or other localized region of a patient body) to radiation (e.g., electromagnetic radiation, ionizing radiation).
  • radiation e.g., electromagnetic radiation, ionizing radiation.
  • the term “beam” refers to a stream of radiation (e.g., electromagnetic wave and/or particle radiation).
  • the beam is produced by a source and is restricted to a small solid angle.
  • the beam is collimated.
  • the beam is generally unidirectional.
  • the beam is divergent.
  • the term “patient” or “subject” refers to a mammalian animal that is identified and/or selected for imaging and/or treatment with radiation. Accordingly, in some embodiments, a patient or subject is contacted with a beam of radiation, e.g., a primary beam produced by a radiation source. In some embodiments, the patient or subject is a human. In some embodiments, the patient or subject is a veterinary or farm animal, a domestic animal or pet, or animal used for clinical research. In some embodiments, the subject or patient has cancer and/or the subject or patient has either been recognized as having or at risk of having cancer.
  • “treatment volume” or “imaging volume” refers to the volume (e.g., tissue) of a patient that is selected for imaging and/or treatment with radiation.
  • the “treatment volume” or “imaging volume” comprises a tumor in a cancer patient.
  • the term “healthy tissue” refers to the volume (e.g., tissue) of a patient that is not and/or does not comprise the treatment volume.
  • the imaging volume is larger than the treatment volume and comprises the treatment volume.
  • the term “radiation source” or “source” refers to an apparatus that produces radiation (e.g., ionizing radiation) in the form of photons (e.g., described as particles or waves).
  • a radiation source is a linear accelerator (“linac”) that produces x-rays or electrons to treat a cancer patient by contacting a tumor with the x-ray or electron beam.
  • the source produces particles (e.g., photons, electrons, neutrons, hadrons, ions (e.g., protons, carbon ions, other heavy ions)).
  • the source produces electromagnetic waves (e.g., x-rays and gamma rays having a wavelength in the range of approximately 1 pm to approximately 1 nm). While it is understood that radiation can be described as having both wave-like and particle-like aspects, it is sometimes convenient to refer to radiation in terms of waves and sometimes convenient to refer to radiation in terms of particles. Accordingly, both descriptions are used throughout without limiting the technology and with an understanding that the laws of quantum mechanics provide that every particle or quantum entity is described as either a particle or a wave.
  • the term “static source” refers to a source that does not revolve around a patient during use of the source for imaging or therapy.
  • a “static source” remains fixed with respect to an axis passing through the patient while the patient is being imaged or treated. While the patient may rotate around said axis to produce relative motion between the static source and rotating patient that is equivalent to the relative motion of a source revolving around a static patient, a static source does not move with reference to a third object, frame of reference (e.g., a treatment room in which a patient is positioned), or patient axis of rotation during imaging or treatment, while the patient is rotated with respect to said third object, said frame of reference (e.g., said treatment room in which said patient is positioned), or patient axis of rotation through the patient during imaging or treatment.
  • a static source may be installed on a mobile platform and the static source may move with respect to the Earth and fixtures on the Earth as the mobile platform moves to transport the static source.
  • the term “static source” may refer to a mobile “static source” provided that the mobile “static source” does not revolve around an axis of rotation through the patient during imaging or treatment of the patient. Further, the static source may translate and/or revolve around the patient to position the static source prior to imaging or treatment of the patient or after imaging or treatment of the patient.
  • the term “static source” may refer to a source that translates or revolves around the patient in non-imaging and non-treatment use, e.g., to position the source relative to the patient when the patient is not being imaged and/or treated.
  • the “static source” is a photon source and thus is referred to as a “static photon source”.
  • Embodiments of the technology described herein relate to locations in space, translations along axes, and/or rotations around axes.
  • a three-dimensional coordinate system is used that comprises an X axis, a Y axis, and a Z axis defined with respect to a patient support and/or a patient. See FIG. 1A and FIG. 1B.
  • embodiments use a coordinate system in which the X axis and Y axis together are in and/or define a horizontal plane and the Z axis is and/or defines a vertical axis.
  • the X axis is a left-right, horizontal, or frontal axis; the Y axis is an anteroposterior, dorsoventral, or sagittal axis; and the Z axis is a sagittal or longitudinal axis.
  • the X axis and the Y axis together are in and/or define a horizontal, transverse, and/or axial plane.
  • the Y axis and the Z axis together are in and/or define a sagittal or longitudinal plane.
  • the X axis and the Z axis together are in and/or define a frontal or coronal plane. Accordingly, in some embodiments, descriptions of movements as “forward” or “backward” are movements along the Y axis; descriptions of movements as “left” or “right” are movements along the X axis; and descriptions of movements as “up” and “down” are movements along the Z axis. Furthermore, a rotation described as “roll” is rotation around the Y axis; a rotation described as “pitch” is rotation around the X axis; and a rotation described as “yaw” is rotation around the Z axis.
  • Angles of rotation around the X, Y, and Z axes may be designated ψ (psi), φ (phi), and θ (theta), respectively.
  • technologies are described as having six degrees of freedom, e.g., translations along one or more of the X, Y, and/or Z axes; and/or rotations around one or more of the X, Y, and/or Z axes.
  • Adjustments or changes in position by translation in the X, Y, and Z directions, respectively, may be denoted by ΔX, ΔY, and ΔZ.
  • Adjustments or changes in position by rotation around the X, Y, and Z axes, respectively, may be denoted by ψ, φ, and θ.
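The notation above maps directly onto a simple six-degree-of-freedom record; the class and field names below are chosen for readability and are not mandated by the text.

```python
from dataclasses import dataclass

@dataclass
class SixDofAdjustment:
    dX: float     # translation along the X axis
    dY: float     # translation along the Y axis
    dZ: float     # translation along the Z axis
    psi: float    # rotation around the X axis (pitch)
    phi: float    # rotation around the Y axis (roll)
    theta: float  # rotation around the Z axis (yaw)
```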
  • the term “scene” refers to a reference image and/or set of reference images, information identifying the camera(s) that captured the reference image and/or set of reference images, and a region of interest setting for each camera that captured the reference image and/or set of reference images.
  • the scene may be saved in a non-volatile storage medium and retrieved from the non-volatile storage medium to provide a retrieved scene. For instance, the scene may be saved when reference images are obtained and saved for a patient position setup, patient positioner configuration setup, or imaging setup.
  • Each image or set of reference images may be saved with associated scene information identifying the camera that obtained the reference image and the region of interest of the camera when it obtained the reference image.
  • Information used for identifying a camera may be any information that unambiguously identifies a camera and that is persistently or essentially persistently associated with a camera (e.g., at least between capture of a reference image with the camera and use of the reference image and the camera for alignment of a patient positioner and/or patient for imaging).
  • information used for identifying a camera is, e.g., a hardware address (e.g., an Ethernet address, a media access control (MAC) address, or a static internet protocol (IP) address) or another identifier (e.g., “camera 1”, “camera 2”, “camera 3”, “camera 4”, “camera 5”, etc.).
  • the region of interest of a camera may indicate a zoom factor of a lens of a camera or may identify a subset of camera sensor pixels (e.g., an area of the sensor array comprising a subset of pixels) shown or saved as an image (e.g., a “digital zoom”).
  • the region of interest setting associated with a camera describes an area of a camera sensor that provided the image pixels saved in the reference image.
  • the region of interest setting may subsequently be used to select the same area of the camera sensor for providing live video to superimpose over the saved reference image.
  • the scene information identifying the list of selected cameras and the region of interest of each camera identifies the cameras and the sensor pixels of each of the cameras to be used in obtaining and displaying live tracking images of the patient for alignment with the reference images obtained previously using the list of selected cameras and region of interest of each selected camera.
  • FIG.1C is a schematic drawing showing an exemplary embodiment of a data record of a scene 500 comprising a number of images (image 511, image 512, image 513, image 514, and image 515 (e.g., at least three of which show views that are mutually orthogonal to each other)), a list of cameras 520 used to record the images (e.g., a list of unambiguous identifiers each associated one-to-one with a physical camera of the OGTS), and a list 530 of regions of interest of each camera of the list of cameras.
  • the scene may be retrieved to provide a number of reference images.
  • While FIG.1C shows a scene comprising five images and associated camera and ROI information for five cameras, the technology is not limited to a scene comprising five images.
  • a scene may comprise 1, 2, 3, 4, or 5 images and associated information identifying 1, 2, 3, 4, or 5 cameras and 1, 2, 3, 4, or 5 ROIs.
  • particular embodiments relate to saving and retrieving scenes comprising three images taken from three mutually orthogonal cameras, a list of the three mutually orthogonal cameras, and a list comprising each ROI for each of the three mutually orthogonal cameras that was used to produce each of the three images. See FIG.1D.
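A sketch of the scene record described above (cf. FIG. 1C and FIG. 1D) follows, pairing each reference image with the camera that produced it and that camera's region of interest; the class and field names are illustrative and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class RegionOfInterest:
    # Rectangular subarray of sensor pixels: top-left corner plus width and height.
    x0: int
    y0: int
    width: int
    height: int

@dataclass
class Scene:
    name: str
    reference_images: list       # e.g., one image array per camera
    camera_ids: list             # unambiguous identifiers, e.g., MAC addresses or "camera 1"
    regions_of_interest: list    # one ROI per camera

    def __post_init__(self):
        # Each image is associated one-to-one with a camera and its ROI.
        assert len(self.reference_images) == len(self.camera_ids) == len(self.regions_of_interest)

# Example: a three-camera scene from mutually orthogonal views (overhead plus two peripheral).
scene = Scene(
    name="setup scene 001",
    reference_images=[None, None, None],   # placeholders for captured reference images
    camera_ids=["camera 5 (overhead)", "camera 1", "camera 2"],
    regions_of_interest=[RegionOfInterest(0, 0, 2000, 2000)] * 3,
)
```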
  • a sensor comprises a pixel area comprising a matrix of N × M elements called pixels or photosites, where N is the number of columns and M is the number of rows.
  • Each pixel comprises a photosensitive region for accumulating incoming light energy in the form of electric charge and transistors that control operation of the pixel and provide the information from the pixel to a microprocessor and/or memory.
  • region of interest refers to a portion of a sensor that is selected for producing an image (e.g., for displaying in a window of the OGTS GUI).
  • a sensor comprises an array of pixels (also known as photosites) and the region of interest defines a subarray (e.g., a rectangular subarray) of pixels of the sensor that is read by a processor to form an image. Selecting a subarray of pixels of a sensor may also be termed “windowing”. Selecting an ROI is advantageous to provide a zoomed-in image of an object (e.g., patient) or portion of an object (e.g., portion of a patient).
  • Selecting an ROI is also advantageous to increase the frame rate of displaying and refreshing an image in a window of the OGTS GUI. Decreasing the number of pixels to read, transmit, and display by selecting an ROI increases the throughput of displaying images per unit time for a constant throughput of pixels per unit time. Performing analysis or calculations on a subset of pixels similarly increases the efficiency of performing analysis or calculations on images. For instance, a gigabit rate of data transmission provides a frame rate of approximately 2 frames per second for a 20-megapixel image. However, selecting an ROI of 4,000,000 pixels (e.g., described by approximately 100 megabits of data) increases the frame rate to approximately 10 frames per second.
  • the region of interest is defined by indicating a first pixel and a second pixel that define opposite corners of a rectangular subarray of pixels of the sensor. In some embodiments, the region of interest is defined by indicating a pixel defining one corner of a rectangular subarray of pixels of the sensor and a height and width of the subarray of pixels. In some embodiments, the region of interest is a shape (e.g., a regular or irregular shape) defined by indicating one or more pixels that define the perimeter of the region of interest. In some embodiments, a selection circuit is used to provide control signals to the selected pixels of the ROI.
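Two small sketches of the points above: windowing a sensor frame to an ROI defined by two opposite corner pixels, and a back-of-the-envelope check of the quoted frame rates. The pixel depth of roughly 25 bits per pixel (about 24-bit color plus transport overhead) is an assumption and is not stated in the text.

```python
import numpy as np

def window(frame: np.ndarray, first_pixel: tuple, second_pixel: tuple) -> np.ndarray:
    """Read only the rectangular ROI defined by two opposite corner pixels (row, col)."""
    (r0, c0), (r1, c1) = first_pixel, second_pixel
    return frame[min(r0, r1):max(r0, r1) + 1, min(c0, c1):max(c0, c1) + 1]

LINK_BITS_PER_S = 1e9   # gigabit data link
BITS_PER_PIXEL = 25     # assumed: ~24-bit color plus overhead

def approx_frame_rate(pixels: int) -> float:
    return LINK_BITS_PER_S / (pixels * BITS_PER_PIXEL)

print(round(approx_frame_rate(20_000_000), 1))  # full 20-megapixel frame: ~2.0 fps
print(round(approx_frame_rate(4_000_000), 1))   # 4-megapixel ROI: ~10.0 fps
```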
  • the OGTS finds use with a patient positioning apparatus (e.g., as described in U.S. Pat. No.11,529,109 (PATIENT POSITIONING APPARATUS) incorporated herein by reference) and/or a patient positioning system comprising a patient support (e.g., as described in U.S. Pat. App. Ser. No.17/894,335 (PATIENT POSITIONING SYSTEM), incorporated herein by reference).
  • the technology provided herein finds use with a patient support that is a component or subsystem of a patient positioning apparatus or a patient positioning system as described in U.S. Pat. No.11,529,109 and/or U.S. Pat. App. Ser. No.
  • the OGTS finds use with a medical imaging apparatus (e.g., a magnetic resonance imaging apparatus, a CT scanner (e.g., as described in U.S. Pat. App. Ser. No. 17/535,091, incorporated herein by reference), etc.)
  • the OGTS finds use with a radiation therapy apparatus, e.g., a radiation source (e.g., a stationary radiation source) for particle therapy, e.g., photon (e.g., x-ray) and/or hadron (e.g., proton) therapy.
  • An exemplary patient positioning apparatus or patient support is shown in FIG.1A and FIG. 1B with an associated coordinate system.
  • A medical imaging and treatment system comprising a patient positioning apparatus or patient support, a patient, and components of the OGTS is shown in FIG.2A to FIG.2D.
  • an OGTS 200 comprising a patient positioning system 220 (e.g., comprising a patient 900) and a camera system.
  • the OGTS finds use with a medical imaging device (e.g., a CT scanner 210).
  • the OGTS finds use with a radiation source (e.g., a static radiation source) to provide a therapeutic particle (e.g., photon, proton) beam to a patient.
  • the OGTS has a main X axis 801 (FIG. 2A and FIG.2C), a main Y axis 802 (FIG. 2A and FIG.2C), and a main Z axis 803 (FIG.2B and FIG.2D).
  • the camera system comprises an overhead camera 235 (FIG. 2B and FIG. 2D) and one or more peripheral cameras 231, 232, 233, and/or 234 (FIG. 2A and 2B). In some embodiments, any two, any three, or all four of the cameras 231, 232, 233, and/or 234 are provided in the systems described herein.
  • the camera system comprises an overhead camera 235 and at least two peripheral cameras 231, 232, 233, and/or 234 (FIG. 2A to FIG.2D). In some embodiments, the camera system comprises an overhead camera 235 and four peripheral cameras 231, 232, 233, and 234 (FIG. 2A to FIG. 2D). In some embodiments, the overhead camera 235 is placed on the main Z axis 803 of the patient positioning system 220 and/or of the CT scanner 210.
  • the camera system comprises two (e.g., at least two (e.g., two, three, or four)) peripheral cameras spaced at an interval of 90° (e.g., substantially and/or essentially 90°) around the periphery of a patient positioning system 220 (e.g., comprising a patient 900) and/or CT scanner 210 (FIG. 2A and FIG.2C).
  • the cameras are used to monitor and/or correct the position of an object, e.g., using the overhead camera to view and/or correct rotation errors of the object around the Z axis and to view and/or correct translation errors in X or Y directions; and using one or more peripheral cameras to view and/or correct translation errors in the X, Y, or Z directions and/or to view and/or correct rotation errors around the X, Y, or Z axes.
  • An advantage of the technology is the ability to obtain a correction (e.g., a 4-degree correction) for the position of an object using three orthogonal camera views and previously saved reference images without requiring mathematical calculations to determine a transformation in space.
  • the object is a patient.
  • the technology is not limited in the placement of the peripheral cameras provided that two of the peripheral cameras are orthogonal to each other and each of the peripheral cameras is orthogonal to the overhead camera.
  • Embodiments comprising exemplary placements of the cameras are described below. Placement of the cameras may be adapted from these exemplary positions to accommodate components of an imaging system, radiotherapy system, or patient positioning with which the OGTS is used.
  • For example, embodiments provide a camera system comprising two, three, or four peripheral cameras 231, 232, 233, and/or 234 spaced at an interval of 90° (e.g., substantially and/or essentially 90°) around the periphery of the patient positioning system 220 (e.g., comprising a patient 900) and/or CT scanner 210.
  • two, three, or four peripheral cameras 231, 232, 233, and/or 234 are placed on the major axes X and/or Y of the patient positioning system 220 (FIG. 2C and FIG. 2D).
  • one or two peripheral cameras 232 and/or 234 are hidden from view by components of the CT scanner 210 (e.g., as shown by the dotted rectangles in FIG.2C).
  • One or two peripheral cameras 231 and/or 233 are provided in front of and/or behind the patient positioning system 220.
  • the rear peripheral camera 233, if present, is occluded from view by the patient positioning system 220 and/or by the patient 900.
  • an object of interest (e.g., patient 900) may face one of the cameras.
  • embodiments provide for detecting transverse movements of the object using the camera toward which the object faces and detecting longitudinal movements in other cameras (e.g., two other cameras) that are orthogonal to the camera toward which the object is facing.
  • embodiments provide that two, three, or four of the peripheral cameras 231, 232, 233, and/or 234 are placed at positions that are displaced from the main X axis 801 or main Y axis 802 of the patient positioning system 220 and/or CT scanner 210 by a rotation of 45° (e.g., substantially and/or essentially 45°) in the XY plane around the main Z axis 803.
  • the overhead camera 235 (FIG.2B and FIG.2D) and the one or more peripheral cameras 231, 232, 233, and/or 234 are placed to image the patient positioning system 220 and a patient 900 when positioned on the patient positioning system 220.
  • the overhead camera 235 (FIG. 2B and FIG.2D) and the one or more peripheral cameras 231, 232, 233, and/or 234 provide a number of views of the patient positioning system 220 and a patient 900 when positioned on the patient positioning system 220.
  • the overhead camera 235 is placed to provide an overhead view of the patient positioning system 220 and/or a patient 900, e.g., through a central opening (e.g., bore) of a scanner ring of the CT scanner 210.
  • FIG.2A and FIG.2C show a view of the patient positioning system 220 and a patient 900 as viewed by the overhead camera 235 through the central opening of the scanner ring.
  • a pair of adjacent peripheral cameras provides two views (e.g., orthogonal views) of the patient positioning system 220 and/or a patient 900 that are used to construct a three-dimensional image of the patient positioning system 220 and/or a patient 900 (e.g., using triangulation). See, e.g., Hartley and Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press (New York), 2nd edition, 2003), incorporated herein by reference.
  • three or four cameras 231, 232, 233, and/or 234 provide three or four views of the patient positioning system 220 and/or a patient 900 that are used with the view provided by overhead camera 235 to construct a three-dimensional image (e.g., a surface rendering) of the patient positioning system 220 and/or a patient 900.
  • the overhead camera 235 and two adjacent peripheral cameras 231, 232, 233, and/or 234 are positioned in space so that the main axes of the fields of view for the three cameras (e.g., overhead camera 235, peripheral camera 231, and peripheral camera 232; overhead camera 235, peripheral camera 232, and peripheral camera 233; overhead camera 235, peripheral camera 233, and peripheral camera 234; or overhead camera 235, peripheral camera 234, and peripheral camera 231) are mutually orthogonal in three-dimensional space.
  • See, e.g., FIG. 2E.
  • the overhead camera 235 is positioned in space so that the main axis of the field of view for overhead camera 235 is perpendicular to each main axis of each field of view of each peripheral camera 231, 232, 233, and 234 (e.g., the main axis of the field of view of overhead camera 235 is perpendicular to the main axis of the field of view of peripheral camera 231, the main axis of the field of view of overhead camera 235 is perpendicular to the main axis of the field of view of peripheral camera 232, the main axis of the field of view of overhead camera 235 is perpendicular to the main axis of the field of view of peripheral camera 233, and the main axis of the field of view of overhead camera 235 is perpendicular to the main axis of the field of view of peripheral camera 234); and each pair of adjacent peripheral cameras (e.g., 231 and 232, 232 and 233, 233 and 234, and 234 and 231) is positioned so that the main axes of their fields of view are perpendicular to each other.
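As an illustration of the orthogonality constraint described above, the following is a minimal Python sketch of how mutual perpendicularity of three camera viewing axes could be checked with dot products. The axis vectors, function name, and tolerance are illustrative assumptions and are not taken from the source.

```python
import numpy as np

# Hypothetical principal (optical-axis) directions, in room coordinates, for the
# overhead camera 235 and two adjacent peripheral cameras (e.g., 231 and 232).
# These unit vectors and the tolerance are illustrative, not values from the source.
overhead_axis = np.array([0.0, 0.0, -1.0])     # looking straight down (-Z)
peripheral_1_axis = np.array([1.0, 0.0, 0.0])  # looking along +X
peripheral_2_axis = np.array([0.0, 1.0, 0.0])  # looking along +Y

def mutually_orthogonal(axes, tol_deg=1.0):
    """Return True if every pair of viewing axes is within tol_deg of 90 degrees."""
    for i in range(len(axes)):
        for j in range(i + 1, len(axes)):
            cos_angle = np.dot(axes[i], axes[j]) / (
                np.linalg.norm(axes[i]) * np.linalg.norm(axes[j]))
            angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
            if abs(angle_deg - 90.0) > tol_deg:
                return False
    return True

print(mutually_orthogonal([overhead_axis, peripheral_1_axis, peripheral_2_axis]))  # True
```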
  • the OGTS comprises five cameras that are arranged as in FIG. 2F with an overhead camera 235 and four peripheral cameras 231, 232, 233, and 234 that are spaced at 90° intervals and each being at 90° from the overhead camera 235.
  • the arrangement shown in FIG. 2F may be modified to accommodate components of the imaging and medical systems that may impede installation of the full five-camera system.
  • the technology comprises OGTS systems comprising an overhead camera 235 and 2, 3, or 4 peripheral cameras 231, 232, 233, and/or 234 with at least three cameras being mutually orthogonal.
  • the two-dimensional images provided from three orthogonal views provide for aligning the patient in the XY, XZ, and YZ planes individually.
  • the cameras 231, 232, 233, 234, and/or 235 are typically placed approximately 2.5 to 4.0 meters (e.g., 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, or 4.0 meters) from the patient positioning system 220 and/or patient 900.
  • the cameras 231, 232, 233, 234, and/or 235 are placed approximately 2.5 to 4.0 meters (e.g., 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, or 4.0 meters) from the isocenter of a medical imaging and/or radiation therapy system comprising the patient positioning system 220.
  • the camera system comprises a number of cameras.
  • the cameras have a spatial resolution of 1.0 mm or better (e.g., a spatial resolution of 1.00 mm, 0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60, 0.55, 0.50, 0.45, 0.40, 0.35, 0.30, 0.25, 0.20, 0.15, or 0.10 mm), wherein a better or higher resolution refers to a smaller (finer) spatial resolution value.
  • the cameras have a spatial resolution of 0.5 mm or better (e.g., a spatial resolution of 0.50, 0.49, 0.48, 0.47, 0.46, 0.45, 0.44, 0.43, 0.42, 0.41, 0.40, 0.39, 0.38, 0.37, 0.36, 0.35, 0.34, 0.33, 0.32, 0.31, 0.30, 0.29, 0.28, 0.27, 0.26, 0.25, 0.24, 0.23, 0.22, 0.21, 0.20, 0.19, 0.18, 0.17, 0.16, 0.15, 0.14, 0.13, 0.12, 0.11, or 0.10 mm).
  • the cameras have a spatial resolution of 0.5 mm or better (e.g., a spatial resolution of 0.50, 0.49, 0.48, 0.47, 0.46, 0.45, 0.44, 0.43, 0.42, 0.41, 0.40, 0.39, 0.38, 0.37, 0.36, 0.35, 0.34, 0.33, 0.32, 0.31, 0.30, 0.29, 0.28, 0.27, 0.26, 0.25, 0.24, 0.23, 0.22, 0.21, 0.20, 0.19, 0.18, 0.17, 0.16, 0.15, 0.14, 0.13, 0.12, 0.11, or 0.10 mm) in close-up zoom mode.
  • the cameras have a refresh rate of at least 5 Hz (e.g., at least 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30 Hz).
  • Although a high refresh rate is not necessary for some embodiments of the technology, the technology is not limited to cameras having refresh rates of approximately 5 to 30 Hz and encompasses use of cameras having higher refresh rates, e.g., 60 Hz, 120 Hz, or 240 Hz or more.
  • one or more cameras comprises an accelerometer and/or other component (e.g., gyroscope, magnetometer) that identifies the orientation of the camera in space and/or its location.
  • quaternion orientation solutions are provided using accelerometer and gyroscope data to identify the orientation and/or location of a camera.
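The following is a minimal sketch of one conventional way to propagate an orientation quaternion from gyroscope angular rates; it is not the source's algorithm, and in practice accelerometer (and, if available, magnetometer) data would typically be fused in, e.g., with a complementary or Kalman filter, to correct gyroscope drift. All names and numerical values are illustrative assumptions.

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def integrate_gyro(q, omega_rad_s, dt_s):
    """One propagation step: q_next = q + 0.5 * dt * q * [0, omega], renormalized."""
    omega_quat = np.array([0.0, *omega_rad_s])
    q_next = q + 0.5 * dt_s * quat_multiply(q, omega_quat)
    return q_next / np.linalg.norm(q_next)

q = np.array([1.0, 0.0, 0.0, 0.0])                             # identity orientation
q = integrate_gyro(q, omega_rad_s=[0.0, 0.0, 0.1], dt_s=0.01)  # small rotation about Z
```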
  • the OGTS comprises a beacon placed at a static and known location that is used to determine the orientation and/or location of the cameras.
  • the cameras have a color sensor comprising approximately 20 megapixels. For instance, in some embodiments, cameras have a sensor array of 5,496 pixels × 3,672 pixels and thus have a sensor comprising 20,181,312 pixels (also known as “photosites”) or approximately 20.2 megapixels. The technology is not limited to cameras comprising approximately 20 megapixels.
  • cameras comprise 5 to 10 megapixels (e.g., 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.0, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9.0, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9, or 10.0 megapixels). In some embodiments, cameras comprise more than 20 megapixels.
  • embodiments comprise use of cameras comprising 20 or more, 30 or more, 40 or more, 50 or more, or 60 or more megapixels.
  • Each sensor pixel transmits an electrical signal corresponding to a number of photons contacting the sensor pixel.
  • the electrical signals are converted to luminance values for all of the sensor pixels.
  • the luminance value of each sensor pixel provides the signal used to produce an image pixel in the resulting image.
  • a camera having a sensor comprising 5,496 sensor pixels × 3,672 sensor pixels produces an image comprising 5,496 image pixels × 3,672 image pixels.
  • Embodiments comprise determining or otherwise providing a relationship between a real world distance and the number of pixels (e.g., sensor pixels or image pixels) corresponding to the real world distance.
  • a camera having a 1.0-meter horizontal field of view captured across a sensor comprising 3,672 columns of pixels has a relationship of 0.2723 mm per pixel in the horizontal direction.
  • the same camera having a 1.5-meter vertical field of view captured across a sensor comprising 5,496 rows of pixels has a relationship of 0.2729 mm per pixel in the vertical direction.
  • the relationship between real world distance and number of pixels is approximately 0.27 mm (e.g., approximately 0.3 mm) per pixel. That is, a first image that is offset by one pixel with respect to a second image indicates a translation of an object in the real world of approximately 0.3 mm.
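The mm-per-pixel relationship above can be reproduced with a few lines of arithmetic. The following Python sketch uses the field-of-view and sensor dimensions from the example; the function name is an assumption for illustration only.

```python
def mm_per_pixel(field_of_view_mm, pixels_across_field):
    """Real-world distance represented by one pixel across the given field of view."""
    return field_of_view_mm / pixels_across_field

horizontal = mm_per_pixel(1000.0, 3672)  # ~0.2723 mm per pixel (1.0 m across 3,672 columns)
vertical = mm_per_pixel(1500.0, 5496)    # ~0.2729 mm per pixel (1.5 m across 5,496 rows)

# A one-pixel offset between two images therefore corresponds to roughly 0.27 mm
# (approximately 0.3 mm) of real-world translation of the imaged object.
offset_pixels = 1
print(round(offset_pixels * horizontal, 4))  # 0.2723
```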
  • the cameras provide images comprising red, green, and blue channels (e.g., RGB images).
  • the cameras are high-resolution gigabit Ethernet (GigE) cameras. Accordingly, in some embodiments, cameras transmit Ethernet frames at a rate of at least one gigabit per second.
  • the cameras are connected to a wireless communications module or a wired communications medium, e.g., optical fiber (e.g., 1000BASE-X), twisted pair cable (e.g., 1000BASE-T), or shielded balanced copper cable (e.g., 1000BASE-CX).
  • the cameras receive electric power over the same cables used for data transmission (e.g., twisted-pair Ethernet cabling), e.g., the cameras receive electrical power over the Ethernet cables (e.g., power over Ethernet (PoE)); and, in some embodiments, the cameras are powered by a separate external power supply.
  • cameras comprise a number of output interfaces (e.g., a number of GigE output interfaces) with each output interface (e.g., each GigE output interface) providing output for each of a number of image data channels.
  • cameras output image data in a red channel (e.g., comprising image data corresponding to wavelengths of approximately 550–750 nm), a green channel (e.g., comprising image data corresponding to wavelengths of approximately 450–650 nm), and a blue channel (e.g., comprising image data corresponding to wavelengths of approximately 350–550 nm), and the cameras have an output interface (e.g., a GigE output interface) for each of the red, green, and blue channels.
  • cameras have a red image channel output (e.g., a red image channel GigE output), a green image channel output (e.g., a green image channel GigE output), and a blue image channel output (e.g., a blue image channel GigE output).
  • a pixel comprises three elements to provide each of the red, green, and blue signals for the light contacting the pixel.
  • Each color element is digitized to provide a range of intensity.
  • the intensity of each color is described using 8 bits (1 byte) to create a range of 256 intensity values for each color.
  • each pixel provides 3 bytes of data or 24 bits of data.
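A rough data-volume estimate follows from the figures above (one 8-bit value per color per pixel, i.e., 3 bytes per full-color pixel). The Python sketch below is illustrative only; it assumes the approximately 20.2-megapixel sensor described earlier and a 5 Hz refresh rate as an example operating point, and it ignores compression and protocol overhead. Under these assumptions, a single full-resolution color channel at 5 Hz approaches the capacity of one gigabit link, which is consistent with providing a separate GigE output per color channel.

```python
# Rough, illustrative data-volume estimate for the sensor and channel layout above.
# Actual throughput depends on compression, packetization overhead, and the selected
# region of interest; the 5 Hz refresh rate is used only as an example operating point.
sensor_pixels = 5496 * 3672        # ~20.2 megapixels
bits_per_color = 8                 # 256 intensity values per color
colors = 3                         # red, green, and blue channels

bytes_per_frame_per_channel = sensor_pixels * bits_per_color // 8
bytes_per_frame_total = bytes_per_frame_per_channel * colors   # 3 bytes per pixel

refresh_rate_hz = 5
bits_per_second_per_channel = bytes_per_frame_per_channel * 8 * refresh_rate_hz

print(bytes_per_frame_total)              # 60,543,936 bytes (~60.5 MB) per full-color frame
print(bits_per_second_per_channel / 1e9)  # ~0.81 Gb/s per color channel at 5 Hz
```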
  • the cameras are mirrorless cameras complying with the MICRO FOUR THIRDS (MFT) SYSTEM.
  • the cameras comprise an MFT lens.
  • the cameras comprise a motorized zoom/focus/aperture MFT lens, e.g., to provide control of the field of view, to provide sufficient imaging detail, and/or to image all or a substantial portion of a patient and/or a patient positioning system.
  • cameras are mounted to a pan-tilt component comprising a mechanism to pan and/or tilt a camera.
  • the OGTS comprises a virtual pan-tilt module that crops images accordingly and thus replaces a mechanical pan-tilt mechanism.
  • the OGTS comprises a thermal (e.g., infrared) camera.
  • a thermal camera is used to monitor and/or detect a breathing cycle of a patient.
  • the OGTS comprises a computer.
  • An exemplary computer that finds use in embodiments of the technology described herein is an industrial computer comprising an INTEL CORE I9 central processing unit, 32 gigabytes of random-access memory, one or more non-volatile memory express (NVMe) solid state drives (SSD) for storage of data and instructions, and a graphics processing unit (e.g., an NVIDIA GPU).
  • the computer communicates through an application programming interface with the cameras, e.g., using a generic programming interface.
  • the generic programming interface complies with the GENICAM standard (e.g., GENICAM Version 2.1.1, incorporated herein by reference).
  • the technology relates to systems.
  • systems comprise the OGTS as described herein and a computer, e.g., as described above and in the examples.
  • systems comprise the OGTS, a computer, and a patient positioning system comprising a patient support or patient positioning apparatus.
  • systems comprise an OGTS as described herein and software components and/or hardware components structured to rotate and/or to translate a patient positioning system, patient positioning apparatus, and/or a patient support or configurable component thereof.
  • systems comprise motors engaged with a patient positioning system, patient positioning apparatus, and/or a patient support or configurable component thereof, a power supply, and software configured to supply power to the motors to translate and/or rotate the patient positioning system, patient positioning apparatus, and/or a patient support or configurable component thereof.
  • systems comprise software components structured to perform a method as described herein, e.g., to determine an adjustment (e.g., one or more of a ΔX, ΔY, or ΔZ translation and/or a rotation about the X, Y, or Z axis) and/or to move (e.g., translate and/or rotate) a patient positioning system, a patient positioning apparatus, a patient support and/or a configurable component thereof.
  • systems comprise an OGTS as described herein and a controller.
  • the OGTS communicates with the controller.
  • the controller activates the OGTS (e.g., activates one or more cameras of the OGTS) and collects one or more image(s) from the one or more cameras.
  • the controller controls the region of interest displayed by one or more cameras.
  • the controller communicates with a graphic display terminal for displaying live images from one or more cameras.
  • the controller communicates with a graphic display terminal for displaying previously saved reference images (e.g., from a scene).
  • the controller communicates with user input devices such as a keyboard for receiving instructions from a user.
  • the controller has a general computer architecture including one or more processors communicating with a memory for the storage of non-transient control programs.
  • the controller communicates with a memory to store images from one or more cameras, to retrieve reference images previously obtained by one or more cameras, to store a scene identifying one or more cameras and the region of interest setting of the one or more cameras, and/or to retrieve a scene identifying one or more cameras and the region of interest setting of the one or more cameras.
  • systems comprise software configured to perform image recording, image analysis, image storage, image manipulation, image registration, and/or image comparison methods.
  • systems comprise hardware components such as microprocessors, graphics processors, and/or communications buses configured to communicate, record, analyze, store, manipulate, and/or compare images.
  • systems comprise a graphical display comprising a graphical user interface (GUI).
  • the GUI comprises a viewing element that displays an image from a camera.
  • the GUI comprises a number of viewing elements that each displays an image from a camera. See FIG. 6 and FIG.7.
  • the GUI comprises a number of control elements.
  • the GUI may comprise a control element that is used to select cameras (e.g., one, two, three, or four of the peripheral cameras) for providing images in the viewing elements of the GUI.
  • a user can select one or more cameras (e.g., 1, 2, or 3 of the 5 cameras in embodiments comprising 5 cameras) to provide useful camera views of the patient based on the orientation of the patient and the cameras providing the best views of the patient.
  • the GUI comprises a zoom control element for setting the region of interest of a camera providing images in the viewing elements of the GUI. Accordingly, the GUI allows a user to select a region of interest in a view provided by an individual camera to obtain more precise information for the object in a selected region.
  • the system controls the data sent by a camera so that only data collected within the selected region of interest is sent to a computer for display on the GUI, thus reducing the amount of data transferred from the camera to the computer and providing increased frame rates (e.g., greater than 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30 Hz, or more).
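To illustrate why restricting the transmitted data to a region of interest can raise the achievable frame rate, the following back-of-the-envelope Python sketch compares a full sensor readout with a cropped region over a single gigabit link. The link capacity, crop size, and the assumption of uncompressed 3-byte pixels on one link are illustrative assumptions, not specifications from the source.

```python
# Back-of-the-envelope comparison of achievable frame rates over a fixed-bandwidth
# link for a full sensor readout versus a cropped region of interest. The link
# capacity, crop size, and uncompressed 3-byte pixels are illustrative assumptions.
LINK_CAPACITY_BPS = 1_000_000_000   # one gigabit Ethernet link
BYTES_PER_PIXEL = 3                 # 8-bit red, green, and blue values

def max_frame_rate_hz(width_px, height_px, link_bps=LINK_CAPACITY_BPS):
    bits_per_frame = width_px * height_px * BYTES_PER_PIXEL * 8
    return link_bps / bits_per_frame

print(round(max_frame_rate_hz(5496, 3672), 1))  # ~2.1 Hz for the full sensor
print(round(max_frame_rate_hz(1920, 1080), 1))  # ~20.1 Hz for a cropped region of interest
```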
  • the GUI may comprise a capture button that may be clicked by a user to cause a camera to provide an image and to record the image from the camera or to record images from a number of cameras simultaneously.
  • the GUI may comprise an image retrieval control element to select and retrieve a reference image and/or a set of reference images.
  • the GUI may comprise a scene selection control element to select and retrieve a saved scene comprising a reference image and/or set of reference images, information identifying the cameras that captured the reference image and/or set of reference images, and the region of interest setting of each camera that captured the reference image and/or set of reference images. See FIG.8.
  • the GUI provides a button to begin tracking mode. In tracking mode, a set of reference images is displayed in viewing elements on the display. Live images provided by the same cameras using the same region of interest settings that were used to obtain the reference images are superimposed over the reference images within the appropriate viewing elements.
  • a user may interact with the viewing elements using a pointing device (e.g., mouse, track ball, track pad, finger or stylus and touch screen, eye tracking, etc.) to manipulate a cursor displayed on the display.
  • Interacting with the viewing elements may comprise translating and/or rotating a reference image to match the position of a live tracking image superimposed on the reference image.
  • the green and blue RGB components of the reference images are displayed in the viewing elements on the display and the red RGB channel of each of the associated live tracking images is superimposed over the green and blue RGB components of the reference images in the viewing elements on the display.
  • the red and blue RGB components of the reference images are displayed in the viewing elements on the display and the green RGB channel of each of the associated live tracking images is superimposed over the red and blue RGB components of the reference images in the viewing elements on the display.
  • the green and red RGB components of the reference images are displayed in the viewing elements on the display and the blue RGB channel of each of the associated live tracking images is superimposed over the green and red RGB components of the reference images in the viewing elements on the display.
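The channel-superposition display described above can be illustrated with a short image-compositing sketch: two color channels are taken from the reference image and the remaining channel from the live image, so that aligned regions appear correctly colored and misaligned regions appear as color fringes. The array shapes, dtypes, and names below are assumptions for illustration.

```python
import numpy as np

def superimpose(reference_rgb, live_rgb, live_channel=0):
    """Keep two color channels from the reference image and take the remaining
    channel (default: red, index 0) from the live image. Aligned regions then
    appear correctly colored; misaligned regions appear as color fringes."""
    composite = reference_rgb.copy()
    composite[..., live_channel] = live_rgb[..., live_channel]
    return composite

# Toy frames standing in for a reference image and a live tracking image.
reference = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
live = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
view = superimpose(reference, live)  # red from live; green and blue from reference
```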
  • a user may manipulate (e.g., translate and/or rotate) a reference image provided in a viewing element on the display to align the reference and live tracking images.
  • a user may interact with the GUI to draw a reference line or reference mark on a reference image and/or a live tracking image, e.g., to place a reference line or mark on a viewing element on the display. See, e.g., FIG.9.
  • a reference line can be provided on the GUI to intersect with a reference point in a treatment room, e.g., the treatment isocenter, as viewed in a viewing element on the display.
  • a reference line serves a similar function as a laser line used in a treatment room.
  • a reference line may provide a virtual laser line.
  • After setting the reference image or reference mark on one or more viewing elements on the display, the reference image or reference mark provides a fixed reference point that does not move with the images provided on the viewing element. Accordingly, moving the object until a point or marker on the object, as viewed in the live tracking image, intersects with the fixed reference point in two camera views provides a technology for aligning the object with a fixed reference point in the room.
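A minimal sketch of the alignment test implied above follows: the object is considered aligned with the fixed room reference point when the tracked point or marker coincides with the reference mark, within a tolerance, in at least two camera views. The dictionary structure, camera names, pixel coordinates, and tolerance are illustrative assumptions.

```python
def aligned_with_reference(marker_px_by_camera, reference_px_by_camera, tol_px=2):
    """Both arguments map a camera name to an (x, y) pixel location in that camera's
    image. Alignment requires coincidence, within tolerance, in at least two views."""
    hits = 0
    for camera, (mx, my) in marker_px_by_camera.items():
        rx, ry = reference_px_by_camera[camera]
        if abs(mx - rx) <= tol_px and abs(my - ry) <= tol_px:
            hits += 1
    return hits >= 2

marker = {"overhead": (312, 208), "peripheral_1": (515, 340)}
reference = {"overhead": (313, 207), "peripheral_1": (514, 341)}
print(aligned_with_reference(marker, reference))  # True
```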
  • the technology relates to embodiments of methods.
  • the technology provides methods for imaging a patient.
  • the technology provides methods for treating a patient.
  • a method for treating a patient comprises a method for imaging a patient.
  • methods for treating a patient comprise a pre-treatment patient immobilization and imaging phase 1000 (FIG.10A–FIG. 10D) and a treatment phase 2000 (FIG.11A–FIG.11D).
  • the technology provides a method for pre-treatment patient immobilization and imaging 1000 (also known as a “simulation” phase).
  • the method for pre-treatment patient immobilization and imaging 1000 comprises starting 1100 an optical guidance tracking system (OGTS) session.
  • starting 1100 an OGTS session comprises moving a patient positioning apparatus or a patient support (e.g., a patient support that is a component of a patient positioning system) to a patient loading position.
  • the method for pre-treatment patient immobilization and imaging 1000 comprises determining 1200 if initial patient immobilization is needed.
  • Initial patient immobilization may be needed, e.g., for a new patient, a new treatment of a patient, a treatment of a new region of a patient, or for a treatment of a patient in a new patient posture.
  • If initial patient immobilization is needed (YES), then the method for pre-treatment patient immobilization and imaging 1000 comprises performing an initial patient immobilization method 1300 (FIG. 10B). If an initial patient immobilization is not needed (NO), then the method for pre-treatment patient immobilization and imaging 1000 comprises performing a subsequent patient immobilization method 1400 (FIG. 10C). After performing the initial patient immobilization method 1300 or the subsequent patient immobilization method 1400, the pre-treatment patient immobilization method comprises obtaining 1500 a CT scan of the patient (FIG. 10D).
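The branching described above can be summarized in a short control-flow sketch. The function bodies below are placeholders standing in for the procedures of FIG. 10B, FIG. 10C, and FIG. 10D; only the decision structure reflects the text.

```python
# Placeholder procedures standing in for the methods of FIG. 10B, FIG. 10C, and FIG. 10D.
def start_ogts_session():
    print("1100: start OGTS session")

def initial_patient_immobilization():
    print("1300: initial patient immobilization (FIG. 10B)")

def subsequent_patient_immobilization():
    print("1400: subsequent patient immobilization (FIG. 10C)")

def obtain_ct_scan():
    print("1500: obtain CT scan (FIG. 10D)")

def pre_treatment_immobilization_and_imaging(initial_immobilization_needed: bool):
    start_ogts_session()
    if initial_immobilization_needed:        # decision 1200
        initial_patient_immobilization()
    else:
        subsequent_patient_immobilization()
    obtain_ct_scan()

pre_treatment_immobilization_and_imaging(initial_immobilization_needed=True)
```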
  • FIG. 10B shows an embodiment of a method for initial patient immobilization 1300.
  • embodiments of a method for initial patient immobilization 1300 comprise an initialization and tracking initiation step 1310.
  • the initialization and tracking initiation step 1310 comprises setting up the cameras and preparing the OGTS for imaging a patient position setup and/or a configuration setup as described below.
  • the initialization and tracking initiation step 1310 comprises selecting 1311 a number of cameras for use in imaging a patient position setup and/or a configuration setup.
  • at least three cameras (e.g., an overhead camera and two peripheral cameras that are mutually orthogonal with each other) are selected for use in imaging a patient position setup and/or a configuration setup.
  • four or five cameras are selected (e.g., comprising at least an overhead camera and two peripheral cameras that are mutually orthogonal with each other) for use in imaging a patient position setup and/or a configuration setup.
  • embodiments of the initialization and tracking initiation step 1310 comprise resetting 1312 regions of interest.
  • embodiments of the initialization and tracking initiation step 1310 comprise resetting reference lines or reference marks.
  • the initialization and tracking initiation step 1310 comprises starting 1313 tracking by acquiring video provided by the selected cameras. Tracking 1313 may also include displaying a live image provided by each of the selected cameras on a display in a separate window so that each live image is viewable by a user.
  • the live images show real-time video of the patient position setup and/or configuration setup from multiple, orthogonal views (e.g., from the top and at least two peripheral views) provided by the selected cameras.
  • embodiments of methods comprise drawing a reference line or reference mark on a live tracking image, e.g., to place a reference line or mark on a viewing element on a display.
  • the reference line or reference mark intersects with a reference point in a treatment room, e.g., the treatment isocenter, as viewed in a viewing element on the display.
  • Embodiments of methods for initial patient immobilization 1300 further comprise loading 1320 a patient (e.g., placing a patient) on the patient positioning apparatus (PPA) or patient support.
  • Embodiments of methods for initial patient immobilization 1300 further comprise determining 1330 a patient posture and configuring the patient positioning apparatus (PPA) or patient support to support the patient posture, e.g., by supporting the patient body to be in a comfortable and stable position appropriate for treatment. Determining 1330 the patient posture may comprise manipulating, guiding, and/or applying a force (e.g., by a technician or by the patient positioning apparatus or patient support) to the patient to provide the patient in a posture appropriate for treatment.
  • Configuring the PPA or patient support may comprise moving (e.g., translating and/or rotating) the entire PPA or patient support or may comprise moving (e.g., translating and/or rotating) one or more components of the PPA or patient support (e.g., one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops).
  • methods comprise viewing the patient posture and/or position of the PPA or patient support on a live tracking image and adjusting the patient posture and/or position of the PPA or patient support using a reference line or reference mark provided on the live tracking image that marks a reference point in a treatment room, e.g., the treatment isocenter.
  • embodiments of methods for initial patient immobilization 1300 comprise determining 1340 if the patient is ready to continue. Determining 1340 if the patient is ready to continue may comprise asking the patient if she is comfortable, determining that the patient is in a stable posture, determining that the patient is in a posture appropriate for treatment, and/or otherwise confirming that it is appropriate to proceed to subsequent steps of the method for initial patient immobilization 1300. If the patient is not ready to continue (NO), then the step of determining 1330 the patient posture and configuring the patient positioning apparatus (PPA) or patient support to support the patient posture and the step of determining 1340 if the patient is ready to continue are repeated. If the patient is ready to continue (YES), then the method proceeds to saving 1350 the patient position setup scene.
  • Saving 1350 the patient position setup scene primarily records the posture (e.g., patient position) of the patient in a position appropriate for subsequent treatment.
  • Saving 1350 the patient position setup scene comprises saving a list of the selected cameras providing images of the patient position, saving each of the images of the patient position provided by each of the selected cameras, and saving the region of interest that was saved as an image for each of the selected cameras during image acquisition.
  • saving 1350 the patient position setup scene comprises optionally saving patient identifying information, saving the date and time when the patient position setup scene is saved, saving the type of treatment to be performed on the patient in a subsequent treatment phase, saving information identifying the OGTS user performing the method for initial patient immobilization 1300, etc.
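The items saved with a setup scene (the selected cameras, the reference images, the per-camera regions of interest, and optional metadata) can be represented by a simple record. The following Python dataclass is a hypothetical illustration; the field names and types are assumptions, not identifiers from the source.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Optional, Tuple

@dataclass
class SetupScene:
    """Hypothetical record of a saved scene; field names are illustrative only."""
    scene_type: str                                             # e.g., "patient_position"
    cameras: List[str]                                          # selected cameras, e.g., ["235", "231", "232"]
    images: Dict[str, bytes]                                    # reference image per camera
    regions_of_interest: Dict[str, Tuple[int, int, int, int]]   # (x, y, width, height) per camera
    saved_at: datetime = field(default_factory=datetime.now)
    patient_id: Optional[str] = None                            # optional identifying information
    treatment_type: Optional[str] = None
    operator: Optional[str] = None                              # OGTS user who performed the setup
```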
  • embodiments of methods for initial patient immobilization 1300 comprise selecting 1360 a number of cameras for use in imaging the patient and PPA or patient support in the imaging position.
  • at least three cameras (e.g., an overhead camera and two peripheral cameras that are mutually orthogonal with each other) are selected for use in imaging the patient and PPA or patient support in the imaging position.
  • four or five cameras are selected (e.g., comprising at least an overhead camera and two peripheral cameras that are mutually orthogonal with each other) for use in imaging the patient and PPA or patient support in the imaging position.
  • Embodiments of methods for initial patient immobilization 1300 comprise moving 1370 the PPA or patient support to the imaging position.
  • Moving 1370 the PPA or patient support may comprise translating the PPA or patient support along the X, Y, or Z axis and/or rotating the PPA or patient support around one or more of the X, Y, or Z axis.
  • selecting 1360 a number of cameras for use in imaging the patient and/or PPA or patient support in the imaging position is performed prior to moving 1370 the PPA or patient support to the imaging position.
  • moving 1370 the PPA or patient support to the imaging position is performed prior to selecting 1360 a number of cameras for use in imaging the patient and/or PPA or patient support in the imaging position.
  • methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position.
  • methods comprise viewing the PPA or patient support on a live tracking image and ASTO-41250.601 adjusting the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point.
  • embodiments of methods for initial patient immobilization 1300 comprise determining 1380 if the patient is ready for imaging. Determining 1380 if the patient is ready for imaging may comprise determining that the PPA or patient support and the patient are in the correct position for imaging.
  • determining 1380 if the patient is ready for imaging may optionally comprise informing the patient that she is positioned for imaging, asking the patient if she is comfortable, determining that the patient is in a stable posture, determining that the patient is in a posture appropriate for treatment, and/or otherwise confirming that it is appropriate to proceed to subsequent steps of the method for initial patient immobilization 1300. If the patient is not ready for imaging (NO), then the step of moving 1370 the PPA or patient support to the imaging position and the step of determining 1380 if the patient is ready for imaging may be repeated. If the patient is ready for imaging (YES), then the method proceeds to saving 1390 the imaging position setup scene.
  • Saving 1390 the imaging position setup scene comprises saving a list of the selected cameras providing images of the imaging position, saving each of the images of the imaging position provided by each of the selected cameras, and saving the region of interest of each of the selected cameras.
  • saving 1390 the imaging position setup scene comprises optionally saving patient identifying information, saving the date and time when the imaging position setup scene is saved, saving the type of treatment to be performed on the patient in a subsequent treatment phase, saving information identifying the OGTS user performing the method for initial patient immobilization 1300, etc.
  • the method comprises obtaining 1500 a CT scan (FIG.10D), e.g., as described below.
  • FIG. 10C shows an embodiment of a method for subsequent patient immobilization 1400 (e.g., to reproduce a PPA or patient support configuration, patient position, and/or imaging position previously saved in a patient support configuration setup scene, patient position setup scene, and/or imaging position setup scene).
  • embodiments of a method for subsequent patient immobilization 1400 comprise retrieving 1411 a saved configuration setup scene to provide a retrieved configuration setup scene.
  • a saved configuration setup scene was previously saved during performing a method for obtaining 1500 a CT scan of a patient after performing a method for initial patient immobilization 1300 (e.g., comprising saving 1560 a configuration setup scene), e.g., as described below.
  • the retrieved configuration setup scene comprises saved images of the PPA or patient support configuration (e.g., images showing views of the PPA or patient support from at least three orthogonal directions), a list of cameras that provided the saved images of the PPA or patient support configuration, and the region of interest of each of the selected cameras that provided the images during image acquisition.
  • methods further comprise displaying each of the saved images showing views of the PPA or patient support from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows.
  • Each saved and displayed image shows a view of the configuration setup provided by the selected cameras listed in the saved configuration setup scene.
  • the images of the retrieved configuration setup scene provide reference images for correctly configuring the PPA or patient support.
  • methods comprise configuring 1412 a patient positioning apparatus or patient support.
  • the retrieved configuration setup scene comprises information describing the configuration of the PPA or patient support for use in configuring 1412 the patient positioning apparatus or patient support.
  • the retrieved configuration setup scene comprises information describing the location of the PPA or patient support and/or the position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops).
  • configuring 1412 a patient positioning apparatus or patient support comprises configuring the PPA or patient support according to a standard preset describing the approximate location of the PPA or patient support and/or the approximate position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops).
  • embodiments of methods for subsequent patient immobilization 1400 provided herein comprise determining 1420 if the PPA or patient support is configured correctly.
  • determining 1420 if the PPA or patient support is configured correctly comprises using the images of the retrieved configuration setup scene (e.g., displayed on the display) as reference images and live video of the PPA or patient support for correctly configuring the PPA or patient support.
  • the information saved in the configuration setup scene providing the list of cameras that provided the saved images of the PPA or patient support configuration and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the PPA or patient support and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the cameras in the saved configuration setup scene.
  • the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the PPA or patient support that were saved in the saved configuration setup scene and that are provided in the retrieved configuration setup scene.
  • a user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras.
  • methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position.
  • methods comprise viewing the PPA or patient support on a live tracking image and adjusting the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point.
  • the live video from each of the selected cameras showing the PPA or patient support is superimposed on the display over the associated reference image of the PPA or patient support previously saved by the same camera.
  • the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras.
  • the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align.
  • Unaligned regions appear as green or red regions that indicate the PPA or patient support is not in the same position or configuration as the position or configuration shown in the reference images.
  • the OGTS is used to calculate the misalignment between the reference images and the live images.
  • the cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)).
  • the ΔX, ΔY, and/or ΔZ displacement(s) appropriate to position and/or configure the PPA or patient support or components thereof to the position in real space recorded by the reference images is obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ΔX, ΔY, and/or ΔZ displacement according to the pixel size per mm relationship.
  • the OGTS software also allows for rotating images to determine the angles needed to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured.
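The conversion from an image-space misalignment to a real-space correction described above reduces to dividing a pixel offset by the calibrated pixels-per-mm value, with an analogous angle comparison for in-plane rotation. The following Python sketch is illustrative; the calibration value, point coordinates, and function names are assumptions, not values or identifiers from the source.

```python
import numpy as np

PIXELS_PER_MM = 2.0   # example calibration at the isocenter plane (within the 1 to 5 pixels/mm range above)

def displacement_mm(offset_px, pixels_per_mm=PIXELS_PER_MM):
    """Convert a pixel offset between live and reference images into millimeters."""
    return offset_px / pixels_per_mm

def in_plane_rotation_deg(reference_points_px, live_points_px):
    """Estimate an in-plane rotation for one camera view from two matched points
    (e.g., two markers on the patient support) in the reference and live images."""
    ref_vec = np.subtract(reference_points_px[1], reference_points_px[0])
    live_vec = np.subtract(live_points_px[1], live_points_px[0])
    ref_angle = np.degrees(np.arctan2(ref_vec[1], ref_vec[0]))
    live_angle = np.degrees(np.arctan2(live_vec[1], live_vec[0]))
    return live_angle - ref_angle

print(displacement_mm(14))                                                     # 7.0 mm shift in this view
print(round(in_plane_rotation_deg([(0, 0), (100, 0)], [(0, 0), (98, 7)]), 1))  # ~4.1 degrees
```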
  • embodiments comprise using the OGTS to determine 1420 if the PPA or patient support is configured and/or positioned correctly. If the PPA or patient support is not configured or positioned correctly (NO), a user may use the information from the OGTS relating to the ΔX, ΔY, and/or ΔZ displacements and/or the rotations about the X, Y, and Z axes appropriate for configuring the PPA or patient support correctly, and the step of configuring 1412 the PPA or patient support and the step of determining 1420 if the PPA is configured correctly may be repeated. If the PPA or patient support is configured and/or positioned correctly (YES), then the method for subsequent patient immobilization 1400 proceeds to the next step of retrieving 1431 a saved patient position setup scene.
  • embodiments of a method for subsequent patient immobilization 1400 comprise retrieving 1431 a saved patient position setup scene to provide a retrieved patient position setup scene.
  • a saved patient position setup scene was previously saved during performing a method for initial patient immobilization 1300 (e.g., comprising saving 1350 a patient position setup scene).
  • the retrieved patient position setup scene comprises saved images of the patient position (e.g., images showing views of the patient from at least three orthogonal directions), a list of cameras that provided the saved images of the patient position, and the region of interest of each of the selected cameras that provided the images during image acquisition.
  • the images show orthogonal views of the patient positioned on the PPA or patient support in a patient posture appropriate for treatment.
  • methods further comprise displaying each of the saved images showing views of the patient position from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows.
  • Each saved and displayed image shows a view of the patient position setup provided by the selected cameras listed in the retrieved configuration setup scene.
  • the images of the retrieved patient position setup scene (e.g., displayed on the display) provide reference images for correctly positioning the patient in the appropriate patient position (e.g., patient posture).
  • methods comprise loading 1432 a patient on the PPA or patient support and positioning 1433 the patient on the PPA or patient support.
  • the retrieved patient position setup scene comprises information describing the patient position (e.g., patient posture) and/or the configuration of the PPA or patient support for use in positioning 1433 the patient in the correct patient position.
  • the retrieved patient position setup scene comprises information describing the location and/or the position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops) to provide the appropriate patient position.
  • positioning 1433 the patient comprises positioning the patient to a standard patient position (e.g., a standard patient posture) that may be modified as needed by adjusting the configuration of the PPA or patient support (e.g., by adjusting the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops).
  • embodiments of methods for subsequent patient immobilization 1400 provided herein comprise determining 1440 if the patient is positioned correctly.
  • determining 1440 if the patient is positioned correctly comprises using the images of the retrieved patient position setup scene (e.g., displayed on the display) as reference images and live video of the positioned patient for correctly positioning the patient (e.g., by configuring the PPA or patient support).
  • the information in the retrieved patient position setup scene providing the list of cameras that provided the saved images of the patient position and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the saved patient position setup scene.
  • the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient and patient position that were saved in the saved patient position setup scene and provided in the retrieved patient position setup scene.
  • a user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras.
  • methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position.
  • methods comprise viewing the patient on the PPA or patient support on the live tracking image and adjusting the patient and/or the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point.
  • the live video from each of the selected cameras showing the patient position is superimposed on the display over the associated saved reference image of the patient position previously saved by the same camera.
  • the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras.
  • the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align. Unaligned regions appear as green or red regions that indicate the patient is not in the same position as the position shown in the reference images.
  • the OGTS is used to calculate the misalignment between the reference images and the live images.
  • the cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)).
  • the ΔX, ΔY, and/or ΔZ displacement(s) appropriate to position the patient in real space to match the position recorded by the reference images is obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ΔX, ΔY, and/or ΔZ displacement according to the pixel size per mm relationship.
  • the OGTS software also allows for rotating images to determine the angles needed to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 1440 if the patient is positioned correctly.
  • If the patient is not positioned correctly (NO), a user may use the information from the OGTS relating to the ΔX, ΔY, and/or ΔZ displacements and/or the rotations about the X, Y, and Z axes appropriate for positioning the patient correctly, and the step of positioning 1433 the patient and the step of determining 1440 if the patient is positioned correctly may be repeated. If the patient is positioned correctly (YES), then the method for subsequent patient immobilization 1400 proceeds to the next step of retrieving 1450 a saved imaging position setup scene. As shown in FIG. 10C, embodiments of a method for subsequent patient immobilization 1400 comprise retrieving 1450 a saved imaging position setup scene to provide a retrieved imaging position setup scene.
  • a saved imaging position setup scene was previously saved during performing a method for initial patient immobilization 1300 (e.g., comprising saving 1390 an imaging setup scene).
  • the retrieved imaging position setup scene comprises saved images of the patient and PPA or patient support in the imaging position (e.g., images showing views of the patient and PPA or patient support in the imaging position from at least three orthogonal directions), a list of cameras that provided the saved images of the patient and PPA or patient support in the imaging position, and the region of interest of each of the selected cameras that provided the images during image acquisition.
  • the images show orthogonal views of the patient positioned on the PPA or patient support in a patient posture appropriate for treatment and at the proper imaging position.
  • methods further comprise displaying each of the saved images showing views of the patient and PPA or patient support in the imaging position from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows.
  • Each saved and displayed image shows a view of the imaging position setup provided by the selected cameras listed in the retrieved imaging position setup scene.
  • the images of the imaging position setup scene (e.g., displayed on the display) provide reference images for correctly positioning the patient in the appropriate imaging position.
  • methods comprise moving 1460 the patient and PPA or patient support to the imaging position.
  • the retrieved imaging position setup scene comprises information describing the imaging position for use in moving 1460 the patient and PPA or patient support to the correct imaging position.
  • embodiments of methods for subsequent patient immobilization 1400 provided herein comprise determining 1470 if the patient is ready for imaging.
  • determining 1470 if the patient is ready for imaging comprises using the images of the retrieved imaging position setup scene (e.g., displayed on the display) as reference images and live video of the patient positioned on the PPA or patient support for correctly positioning the patient on the PPA or patient support for imaging.
  • the information in the retrieved imaging position setup scene providing the list of cameras that provided the saved images of the imaging position and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient on the PPA or patient support and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the imaging position setup scene.
  • the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient on the PPA or patient support that were saved in the imaging position setup scene and provided in the retrieved imaging position scene.
  • a user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras.
  • methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position.
  • methods comprise viewing the patient and the PPA or patient support on a live tracking image and adjusting the patient and/or PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point.
  • the live video from each of the selected cameras showing the imaging position is superimposed on the display over the associated saved reference image of the imaging position previously saved by the same camera.
  • When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras.
  • the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align.
  • Unaligned regions appear as green or red regions that indicate the patient is not in the same imaging position as the imaging position shown in the reference images.
  • the OGTS is used to calculate the misalignment between the reference images and the live images.
  • the cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)).
  • the ΔX, ΔY, and/or ΔZ displacement(s) appropriate to position the patient and PPA or patient support in real space to match the imaging position recorded by the reference images are obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ΔX, ΔY, and/or ΔZ displacement according to the pixel size per mm relationship.
  • the OGTS software also allows for rotating images to determine the angles needed to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 1470 if the patient is ready for imaging.
  • If the patient is not ready for imaging (NO), a user may use the information from the OGTS relating to the ΔX, ΔY, and/or ΔZ displacements and/or the rotations about the X, Y, and Z axes appropriate for positioning the patient and PPA or patient support correctly for imaging, and the step of moving 1460 the patient and PPA or patient support and the step of determining 1470 if the patient is ready for imaging may be repeated. If the patient is ready for imaging (YES), then the method for subsequent patient immobilization 1400 proceeds to the next step of obtaining 1500 a CT scan.
  • FIG. 10D shows an embodiment of a method for obtaining 1500 a CT scan (e.g., a pre-treatment CT scan) of a patient, e.g., after performing an initial patient immobilization method 1300 (FIG. 10B) or performing a subsequent patient immobilization method 1400 (FIG. 10C).
  • embodiments of methods for obtaining 1500 a CT scan of a patient comprise starting 1510 a CT scan, e.g., to obtain a CT scan of the patient.
  • a CT scan is obtained using a multi-axis medical imaging apparatus as described in U.S. Pat. App. Pub. No. 2022/0183641, which is incorporated herein by reference.
  • methods for obtaining 1500 a CT scan of a patient comprise determining 1520 if the CT scan is completed. In some embodiments, determining 1520 if the CT scan is completed comprises determining if the CT scan is of adequate quality for treatment planning and treatment of the patient. If the CT scan is not completed (NO), then the steps of starting 1510 the CT scan and determining 1520 if the CT scan is completed are repeated. If the CT scan is completed (YES), then the method for obtaining 1500 a CT scan of a patient comprises saving the CT scan as a pre-treatment CT scan of the patient to be used subsequently for treatment planning and treatment. Next, the method proceeds to moving 1530 the PPA or patient support to an unloading position and unloading 1540 the patient from the PPA or patient support.
  • If the method for obtaining 1500 a CT scan of a patient was performed after performing an initial patient immobilization method 1300, extra care is taken during the unloading 1540 to minimally disturb the PPA or patient support and thus preserve the configuration of the PPA or patient support previously determined in step 1330 of the initial patient immobilization method 1300.
  • If the method for obtaining 1500 a CT scan of a patient was performed after performing a subsequent patient immobilization method 1400 (SUBSEQUENT), then the OGTS session is ended.
  • If the method for obtaining 1500 a CT scan of a patient was performed after performing an initial patient immobilization method 1300 (INITIAL), then the method comprises saving 1560 the configuration setup scene.
  • Saving 1560 the configuration setup scene primarily records the configuration of the PPA or patient support for subsequent use in supporting a patient during treatment.
  • Saving 1560 the configuration setup scene comprises saving a list of the selected cameras providing images of the PPA or patient support configuration, saving each of the images of the PPA or patient support configuration provided by each of the selected cameras, and saving the region of interest of each of the selected cameras during image acquisition.
  • saving 1560 the configuration setup scene comprises optionally saving patient identifying information, saving the date and time when the configuration setup scene is saved, saving the type of treatment to be performed on the patient in a subsequent treatment phase, saving information identifying the OGTS user performing the method for pre-treatment patient immobilization and imaging 1000, saving information describing the positions of the PPA or patient support or components thereof, etc.
  • embodiments of the technology provided herein relate to methods of treating a patient with radiation 2000. For example, e.g., as shown in FIG. 11A, methods of treating a patient comprise using the OGTS system as described herein for treating a patient.
  • the method shown in FIG.11A comprises starting 2100 an OGTS session, loading 2200 a patient on a PPA or patient support in a position appropriate for treatment (FIG. 11B), imaging 2300 the patient (FIG.11C), treating 2400 the patient (FIG. 11D), and ending the OGTS session.
  • embodiments of methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise moving 2210 a PPA or patient support to a loading position.
  • methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise retrieving 2220 a saved configuration setup scene to provide a retrieved configuration setup scene.
  • a saved configuration setup scene was previously saved during performing a method for obtaining 1500 a CT scan of a patient after performing a method for initial patient immobilization 1300 (e.g., comprising saving 1560 a configuration setup scene).
  • the retrieved configuration setup scene comprises saved images of the PPA or patient support configuration (e.g., images showing views of the PPA or patient support from at least three orthogonal directions), a list of cameras that provided the saved images of the PPA or patient support configuration, and the region of interest of each of the selected cameras that provided the images during image acquisition.
  • methods further comprise displaying each of the saved images showing views of the PPA or patient support from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows.
  • Each saved and displayed image shows a view of the configuration setup provided by the selected cameras listed in the saved configuration setup scene.
  • the images of the retrieved configuration setup scene (e.g., displayed on the display) provide reference images for correctly configuring the PPA or patient support.
  • methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter.
  • methods comprise viewing the PPA or patient support on the live tracking image and adjusting the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point.
  • methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise configuring 2230 a PPA or patient support.
  • the retrieved configuration setup scene comprises information describing the configuration of the PPA or patient support for use in configuring 2230 the PPA or patient support.
  • the retrieved configuration setup scene comprises information describing the location of the PPA or patient support and/or the position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops).
  • configuring 2230 a PPA or patient support comprises configuring the PPA or patient support according to a standard preset describing the approximate location of the PPA or patient support and/or the approximate position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops).
  • embodiments of methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise determining 2240 if the PPA or patient support is configured correctly.
  • determining 2240 if the PPA or patient support is configured correctly comprises using the images of the retrieved configuration setup scene (e.g., displayed on the display) as reference images and live video of the PPA or patient support for correctly configuring the PPA or patient support.
  • the information saved in the configuration setup scene providing the list of cameras that provided the saved images of the PPA or patient support configuration and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the PPA or patient support and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the cameras in the saved configuration setup scene.
  • the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the PPA or patient support that were saved in the saved configuration setup scene and that are provided in the retrieved configuration setup scene.
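As a rough sketch of the camera-selection step described above, the following assumes the scene object from the earlier sketch and OpenCV's VideoCapture as the camera interface; the camera_indices mapping from scene camera ids to operating-system device indices is hypothetical.

```python
import cv2

def open_selected_cameras(scene, camera_indices):
    """Open only the cameras listed in the retrieved scene.
    camera_indices maps camera ids from the scene to OS device indices (an assumption)."""
    captures = {}
    for cam_id in scene.cameras:
        cap = cv2.VideoCapture(camera_indices[cam_id])
        if cap.isOpened():
            captures[cam_id] = cap
    return captures

def grab_roi_frames(captures, scene):
    """Grab one live frame per selected camera, cropped to the ROI saved in the scene."""
    frames = {}
    for cam_id, cap in captures.items():
        ok, frame = cap.read()
        if not ok:
            continue
        x, y, w, h = scene.rois[cam_id]
        frames[cam_id] = frame[y:y + h, x:x + w]
    return frames
```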
  • a user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras.
  • methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position.
  • methods comprise viewing the PPA or patient support on the live tracking image and adjusting the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point.
  • the live video from each of the selected cameras showing the PPA or patient support is superimposed on the display over the associated reference image of the PPA or patient support previously saved by the same camera.
  • the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras.
  • the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align.
  • Unaligned regions appear as green or red regions that indicate the PPA or patient support is not in the same position or configuration as the position or configuration shown in the reference images.
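A minimal sketch of the channel-mixing display described above, assuming OpenCV-style BGR frames of equal size; it simply replaces the live frame's red channel with the reference frame's red channel so that misaligned structures show up with a color tint.

```python
import numpy as np

def overlay_reference_on_live(reference_bgr: np.ndarray, live_bgr: np.ndarray) -> np.ndarray:
    """Combine the red channel of the reference image with the green and blue channels
    of the live image (OpenCV images are B, G, R ordered). Where the two views align,
    the composite looks like a normally colored image; misaligned structures appear
    tinted toward red (reference only) or green/blue (live only).
    Assumes both frames have the same height and width."""
    composite = live_bgr.copy()
    composite[:, :, 2] = reference_bgr[:, :, 2]  # replace red channel with the reference's red channel
    return composite
```

A display loop could then call, for example, cv2.imshow("tracking", overlay_reference_on_live(reference, live_frame)) for each selected camera.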
  • the OGTS is used to calculate the misalignment between the reference images and the live images.
  • the cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)).
  • the ΔX, ΔY, and/or ΔZ displacement(s) appropriate to position and/or configure the PPA or patient support or components thereof to the position in real space recorded by the reference images is obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ΔX, ΔY, and/or ΔZ displacement according to the pixel size per mm relationship.
  • the OGTS software also allows for rotating images to determine ΔθX, ΔθY, and ΔθZ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured.
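The pixel-to-millimeter conversion described above reduces to dividing the measured pixel offset by the camera's calibrated pixels-per-mm factor; a rotation measured in the image about the camera axis corresponds directly to a rotation in real space. A minimal sketch (function and argument names are illustrative):

```python
def pixel_offset_to_mm(dx_px: float, dy_px: float, pixels_per_mm: float):
    """Convert an in-plane pixel displacement measured at the isocenter plane into mm
    using the camera's calibrated pixels-per-mm factor (roughly 1 to 5 pixels/mm above)."""
    return dx_px / pixels_per_mm, dy_px / pixels_per_mm

# Example: with a 2.0 pixels/mm calibration, a 14-pixel shift corresponds to 7.0 mm.
dx_mm, dy_mm = pixel_offset_to_mm(14.0, 0.0, 2.0)
```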
  • embodiments comprise using the OGTS to determine 2240 if the PPA or patient support is configured and/or positioned correctly. If the PPA or patient support is not configured or positioned correctly (NO), a user may use the information from the OGTS relating to the ΔX, ΔY, and/or ΔZ displacement and/or ΔθX, ΔθY, and ΔθZ rotations appropriate for configuring the PPA or patient support correctly, and the step of configuring 2230 the PPA or patient support and the step of determining 2240 if the PPA is configured correctly may be repeated.
  • If the PPA or patient support is configured correctly (YES), then the method for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment proceeds to the next step of retrieving 2250 a saved patient position setup scene.
  • embodiments of methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise retrieving 2250 a saved patient position setup scene to provide a retrieved patient position setup scene.
  • a saved patient position setup scene was previously saved during performing a method for initial patient immobilization 1300 (e.g., comprising saving 1350 a patient position setup scene).
  • the retrieved patient position setup scene comprises saved images of the patient position (e.g., images showing views of the patient from at least three orthogonal directions), a list of cameras that provided the saved images of the patient position, and the region of interest of each of the selected cameras that provided the images during image acquisition.
  • the images show orthogonal views of the patient positioned on the PPA or patient support in a patient posture appropriate for treatment.
  • methods further comprise displaying each of the saved images showing views of the patient position from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows.
  • Each saved and displayed image shows a view of the patient position setup provided by the selected cameras listed in the retrieved patient position setup scene.
  • the images of the retrieved patient position setup scene provide reference images for correctly positioning the patient in the appropriate patient position (e.g., patient posture).
  • methods comprise loading 2260 a patient on the PPA or patient support and positioning 2270 the patient on the PPA or patient support.
  • the retrieved patient position setup scene comprises information describing the patient position (e.g., patient posture) and/or the configuration of the PPA or patient support for use in positioning 2270 the patient in the correct patient position.
  • the retrieved patient position setup scene comprises information describing the location and/or the position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops) to provide the appropriate patient position.
  • positioning 2270 the patient comprises positioning the patient to a standard patient position (e.g., a standard patient posture) that may be modified as needed by adjusting the configuration of the PPA or patient support (e.g., by adjusting the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops).
  • embodiments of methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise determining 2280 if the patient is positioned correctly.
  • determining 2280 if the patient is positioned correctly comprises using the images of the retrieved patient position setup scene (e.g., displayed on the display) as reference images and live video of the positioned patient for correctly positioning the patient (e.g., by configuring the PPA or patient support).
  • the information in the retrieved patient position setup scene providing the list of cameras that provided the saved images of the patient position and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the saved patient position setup scene.
  • the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient and patient position that were saved in the saved patient position setup scene and provided in the retrieved patient position setup scene.
  • a user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras.
  • methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position.
  • methods comprise viewing the patient on the PPA or patient support on the live tracking image and adjusting the patient and/or the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point.
  • the live video from each of the selected cameras showing the patient position is superimposed on the display over the associated saved reference image of the patient position previously saved by the same camera.
  • the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras.
  • the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align.
  • Unaligned regions appear as green or red regions that indicate the patient is not in the same position as the position shown in the reference images.
  • the OGTS is used to calculate the misalignment between the reference images and the live images.
  • the cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)).
  • the ΔX, ΔY, and/or ΔZ displacement(s) appropriate to position the patient in real space to match the position recorded by the reference images is obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ΔX, ΔY, and/or ΔZ displacement according to the pixel size per mm relationship.
  • the OGTS software also allows for rotating images to determine ΔθX, ΔθY, and ΔθZ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 2280 if the patient is positioned correctly.
  • If the patient is not positioned correctly (NO), a user may use the information from the OGTS relating to the ΔX, ΔY, and/or ΔZ displacement and/or ΔθX, ΔθY, and ΔθZ rotations appropriate for positioning the patient correctly, and the step of positioning 2270 the patient and the step of determining 2280 if the patient is positioned correctly may be repeated.
  • If the patient is positioned correctly (YES), then the method for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment proceeds to the next step of imaging 2300 the patient for treatment.
  • embodiments of a method for imaging 2300 a patient for treatment comprise retrieving 2310 a saved imaging position setup scene to provide a retrieved imaging position setup scene.
  • a saved imaging position setup scene was previously saved during performing a method for initial patient immobilization 1300 (e.g., comprising saving 1390 an imaging setup scene).
  • the retrieved imaging position setup scene comprises saved images of the patient and PPA or patient support in the imaging position (e.g., images showing views of the patient and PPA or patient support in the imaging position from at least three orthogonal directions), a list of cameras that provided the saved images of the patient and PPA or patient support in the imaging position, and the region of interest of each of the selected cameras that provided the images during image acquisition.
  • the images show orthogonal views of the patient positioned on the PPA or patient support in a patient posture appropriate for treatment and at the proper imaging position.
  • methods further comprise displaying each of the saved images showing views of the patient and PPA or patient support in the imaging position from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows.
  • Each saved and displayed image shows a view of the imaging position setup provided by the selected cameras listed in the retrieved imaging position setup scene.
  • the images of the imaging position setup scene (e.g., displayed on the display) provide reference images for correctly positioning the patient in the appropriate imaging position.
  • methods for imaging 2300 a patient for treatment comprise moving 2320 the patient and PPA or patient support to the imaging position.
  • the retrieved imaging position setup scene comprises information describing the imaging position for use in moving 2320 the patient and PPA or patient support to the correct imaging position.
  • embodiments of methods for imaging 2300 a patient for treatment comprise determining 2230 if the patient is ready for imaging.
  • determining 2230 if the patient is ready for imaging comprises using the images of the retrieved imaging position setup scene (e.g., displayed on the display) as reference images and live video of the patient positioned on the PPA or patient support for correctly positioning the patient on the PPA or patient support for imaging.
  • the information in the retrieved imaging position setup scene providing the list of cameras that provided the saved images of the imaging position and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient on the PPA or patient support and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the imaging position setup scene.
  • the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient on the PPA or patient support that were saved in the imaging position setup scene and provided in the retrieved imaging position scene.
  • a user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras.
  • methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter.
  • methods comprise viewing the patient on the PPA or patient support on the live tracking image and adjusting the patient and/or the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point.
  • the live video from each of the selected cameras showing the imaging position is superimposed on the display over the associated saved reference image of the imaging position previously saved by the same camera.
  • When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras.
  • the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align.
  • Unaligned regions appear as green or red regions that indicate the patient is not in the same imaging position as the imaging position shown in the reference images.
  • the OGTS is used to calculate the misalignment between the reference images and the live images.
  • the cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)).
  • the ΔX, ΔY, and/or ΔZ displacement(s) appropriate to position the patient and PPA or patient support in real space to match the imaging position recorded by the reference images are obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ΔX, ΔY, and/or ΔZ displacement according to the pixel size per mm relationship.
  • the OGTS software also allows for rotating images to determine ΔθX, ΔθY, and ΔθZ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 2230 if the patient is ready for imaging.
  • If the patient is not ready for imaging (NO), a user may use the information from the OGTS relating to the ΔX, ΔY, and/or ΔZ displacements and/or ΔθX, ΔθY, and ΔθZ rotations appropriate for positioning the patient and PPA or patient support correctly for imaging, and the step of moving 2320 the patient and PPA or patient support and the step of determining 2230 if the patient is ready for imaging may be repeated.
  • If the patient is ready for imaging (YES), then the method for imaging 2300 a patient for treatment proceeds to the next step of obtaining a CT scan.
  • embodiments of methods for imaging 2300 a patient for treatment comprise obtaining a CT scan (e.g., a treatment CT scan).
  • methods for imaging 2300 a patient for treatment comprise starting 2240 a CT scan, e.g., to obtain a CT scan of the patient.
  • a CT scan is obtained using a multi-axis medical imaging apparatus as described in U.S. Pat. App. Pub. No. 2022/0183641, which is incorporated herein by reference.
  • methods for imaging 2300 a patient for treatment comprise determining 2250 if the CT scan is completed.
  • determining 2250 if the CT scan is completed comprises determining if the CT scan is of adequate quality for treatment of the patient. If the CT scan is not completed (NO), then the steps of starting 2240 the CT scan and determining 2250 if the CT scan is completed are repeated.
  • If the CT scan is completed (YES), then the method for imaging 2300 a patient for treatment comprises saving the CT scan of the patient as a treatment CT scan of the patient to be used subsequently for registration with a pre-treatment CT scan and for subsequent treatment.
  • the methods of treating a patient with radiation 2000 proceed to treating 2400 the patient with radiation.
  • methods of treating 2400 a patient with radiation comprise obtaining a pre-treatment CT scan (e.g., as provided by a method for obtaining 1500 a CT scan of a patient during a pre-treatment patient immobilization and imaging phase 1000, wherein the method comprises saving the CT scan as a pre-treatment CT scan of the patient) and obtaining a treatment CT scan (e.g., as provided by a method for imaging 2300 a patient during a treatment phase 2000, wherein the method comprises saving a CT scan of a patient as a treatment CT scan of the patient).
  • methods comprise registering 2410 the treatment CT scan and the pre-treatment CT scan and obtaining a correction vector.
  • registering 2410 the treatment CT scan and the pre-treatment CT scan provides for matching a treatment plan to the treatment volume of interest within the patient body so that radiation treatment is delivered accurately to the treatment volume of interest.
  • registering 2410 the treatment CT scan and the pre-treatment CT scan provides for detecting and evaluating anatomical changes that may have occurred after obtaining the pre-treatment scan and prior to the treatment phase. Accordingly, registering 2410 the treatment CT scan and the pre-treatment scan provides important information for patient treatment. Differences between the pre-treatment CT scan and the treatment CT scan are used to determine a correction vector to align the treatment plan (e.g., the treatment beam) to contact the treatment volume of interest within the patient body.
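One common way to obtain such a correction vector, used here purely for illustration, is a rigid registration of the treatment CT to the pre-treatment CT. The sketch below uses SimpleITK as a stand-in registration toolkit; the metric, optimizer settings, and function name are assumptions, not the registration method prescribed by this disclosure.

```python
import SimpleITK as sitk

def rigid_correction_vector(pretreatment_ct_path: str, treatment_ct_path: str):
    """Sketch of a rigid CT-to-CT registration yielding a correction vector
    (three translations in mm and three rotations in radians)."""
    fixed = sitk.ReadImage(pretreatment_ct_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(treatment_ct_path, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=2.0, minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)
    # Euler3DTransform parameters: rotations about X, Y, Z (radians), then translations (mm).
    rx, ry, rz, tx, ty, tz = transform.GetParameters()
    return (tx, ty, tz), (rx, ry, rz)
```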
  • methods comprise verifying correct application of the correction vector to position the patient, e.g., as described below.
  • The technology finds use in treating a number f of treatment fields (e.g., 1, 2, 3, 4, 5, ... , f treatment fields) comprising the treatment volume of interest within the patient body.
  • methods comprise treating a treatment field n, where n is iterated from 1 (treatment field 1) to the number of treatment fields f (treatment field f) to be treated.
  • methods of treating 2400 a patient with radiation comprise selecting 2420 a treatment field n; moving 2430 the patient positioned on the PPA or patient support and applying the correction vector obtained in step 2410 to the radiation treatment plan.
  • methods of treating 2400 a patient with radiation comprise determining 2440 if treatment of the patient at treatment field n is the first instance of treating the patient at treatment field n. If treatment of the patient at treatment field n is the first instance of treating the patient at treatment field n (YES), then methods of treating 2400 a patient with radiation comprise saving 2460 a treatment scene for treatment field n. If treatment of the patient at treatment field n is not the first instance of treating the patient at treatment field n (NO), then methods of treating 2400 a patient with radiation comprise retrieving 2450 a treatment scene for treatment field n.
  • Saving 2460 a treatment scene for treatment field n comprises saving a list of the selected cameras providing images of the patient and PPA or patient support in position for treatment of treatment field n, saving each of the images of the patient and PPA or patient support in position for treatment of treatment field n provided by each of the selected cameras, and saving the region of interest of each of the selected cameras.
  • saving 2460 the treatment scene for treatment field n comprises optionally saving patient identifying information, saving the date and time when the treatment scene for treatment field n is saved, saving the type of treatment to be performed on the patient, saving information identifying the OGTS user performing the method of treating the patient 2400, etc.
  • the method comprises determining 2470 if the patient is ready for treatment as described below.
  • Retrieving 2450 a saved treatment scene for treatment field n provides a retrieved treatment scene for treatment field n.
  • a saved treatment scene for treatment field n was previously saved during performing a method of treating 2400 a patient with radiation (e.g., comprising saving 2460 a saved treatment scene for treatment field n).
  • the retrieved treatment scene for treatment field n comprises saved images of the patient and PPA or patient support in position for treatment of treatment field n (e.g., images showing views of the patient and PPA or patient support in position for treatment of treatment field n from at least three orthogonal directions), a list of cameras that provided the saved images of patient and PPA or patient support in position for treatment of treatment field n, and the region of interest of each of the selected cameras that provided the images during image acquisition.
  • the images show orthogonal views of the patient and PPA or patient support in position for treatment of treatment field n.
  • methods further comprise displaying each of the saved images showing views of the patient and PPA or patient support in position for treatment of treatment field n from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows.
  • Each saved and displayed image shows a view of the patient and PPA or patient support in position for treatment of treatment field n provided by the selected cameras listed in the retrieved treatment scene for treatment field n.
  • the images of the retrieved treatment scene for treatment field n (e.g., displayed on the display) provide reference images for correctly positioning the patient and PPA or patient support in the appropriate location and/or position for treatment of treatment field n.
  • embodiments of methods of treating 2400 a patient with radiation comprise determining 2470 if the patient is ready for treatment.
  • determining 2470 if the patient is ready for treatment comprises using the images of the retrieved treatment scene for treatment field n (e.g., displayed on the display) as reference images and live video of the patient and the PPA or patient support for correctly positioning the patient and PPA or patient support in the appropriate location and/or position for treatment of treatment field n.
  • the information in the treatment scene for treatment field n providing the list of cameras that provided the saved images of the patient and PPA or patient support in position for treatment of treatment field n and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient on the PPA or patient support in position for treatment of treatment field n and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the treatment scene for treatment field n.
  • the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient and PPA or patient support in position for treatment of treatment field n that were saved in the treatment scene for treatment field n and provided in the retrieved treatment scene for treatment field n.
  • a user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras.
  • the live video from each of the selected cameras showing the patient and PPA or patient support in treatment position for treatment of treatment field n is superimposed on the display over the associated saved reference image of the patient and PPA or patient support in treatment position for treatment of treatment field n previously saved by the same camera.
  • When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras.
  • the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align.
  • Unaligned regions appear as green or red regions that indicate the patient is not in the same treatment position for treatment of treatment field n as the treatment position for treatment of treatment field n shown in the reference images.
  • the OGTS is used to calculate the misalignment between the reference images and the live images.
  • the cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)).
  • the ΔX, ΔY, and/or ΔZ displacement(s) appropriate to position the patient and PPA or patient support in real space to match the treatment position recorded by the reference images are obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ΔX, ΔY, and/or ΔZ displacement according to the pixel size per mm relationship.
  • the OGTS software also allows for rotating images to determine ΔθX, ΔθY, and ΔθZ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 2470 if the patient is ready for treatment.
  • If the patient is not ready for treatment (NO), a user may use the information from the OGTS relating to the ΔX, ΔY, and/or ΔZ displacements and/or ΔθX, ΔθY, and ΔθZ rotations appropriate for positioning the patient and PPA or patient support correctly for treatment.
  • Then, the step of moving 2430 the patient and PPA or patient support and/or applying the correction vector, the step of determining 2440 if treatment of the patient at treatment field n is the first instance of treating the patient at treatment field n, the appropriate step of saving 2460 the treatment scene for treatment field n or retrieving 2450 a treatment scene for treatment field n, and the step of determining 2470 if the patient is ready for treatment are repeated. If the patient is ready for treatment (YES), then the method of treating 2400 a patient with radiation proceeds to the next step of treating treatment field n.
  • treating treatment field n comprises contacting a region of a patient body within treatment field n with radiation, e.g., photons (e.g., x-rays, electrons) or hadrons (e.g., protons, neutrons, heavy ions (e.g., carbon ions, 4He ions, neon ions, etc.)) as known in the art.
  • methods of treating 2400 a patient with radiation comprise determining 2491 if the treatment of treatment field n is completed.
  • determining 2491 if the treatment of treatment field n is completed comprises determining if the treatment provided the correct amount of radiation to the treatment field n.
  • Methods for monitoring patient motion. In some embodiments, the technology provides methods for monitoring patient motion, e.g., during treatment.
  • Methods for monitoring patient motion find use in identifying patient movements during a treatment phase that may move a treatment field out of a treatment position or that may move healthy tissue into the path of radiation.
  • Methods for monitoring patient motion also find use in monitoring a rhythmic change in the movement of the patient body, e.g., due to breathing motions.
  • methods for monitoring patient motion comprise using reference image(s) and live video during a treatment phase to monitor and/or identify patient movements that may require intervention by a technician to correct a position of a patient and PPA or patient support.
  • methods for monitoring patient motion comprise providing images of the patient and PPA or patient support in position for treatment for use as reference images.
  • the reference images are provided by a retrieved treatment scene, e.g., as provided by a method or step of retrieving 2450 a saved treatment scene.
  • the reference images are provided by acquiring images of the patient and PPA or patient support in position for treatment of the patient, e.g., as provided by a method or step of saving 2460 a treatment scene. Accordingly, the reference images provide images of the patient and PPA or patient support in position for treatment, e.g., images showing views of the patient and PPA or patient support in position for treatment from at least three orthogonal directions.
  • methods further comprise displaying each of the saved images showing views of the patient and PPA or patient support in position for treatment from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows.
  • Each saved and displayed image shows a view of the patient and PPA or patient support in position for treatment provided by the selected cameras listed in the retrieved treatment scene.
  • the images of the treatment scene (e.g., displayed on the display) provide reference images for monitoring patient motion.
  • monitoring patient motion comprises using the reference images and live video of the patient and the PPA or patient support to monitor patient position relative to the reference images.
  • the information associated with the images of the treatment scene providing the list of cameras that provided the reference images and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient on the PPA or patient support in position and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the treatment scene.
  • the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient and PPA or patient support in position for treatment that were saved in the treatment scene.
  • a user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras.
  • the live video from each of the selected cameras showing the patient and PPA or patient support is superimposed on the display over the associated reference image of the patient and PPA or patient support in treatment position.
  • the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras.
  • the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align. Unaligned regions appear as green or red regions that indicate the patient is not in the same treatment position as the treatment position shown in the reference images.
  • the OGTS is used to determine a misalignment between the reference images and the live images.
  • the cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)).
  • the ΔX, ΔY, and/or ΔZ displacement(s) appropriate to position the patient and PPA or patient support in real space to match the treatment position recorded by the reference images are obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ΔX, ΔY, and/or ΔZ displacement according to the pixel size per mm relationship.
  • the OGTS software also allows for rotating images to determine ΔθX, ΔθY, and ΔθZ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to monitor patient motion by monitoring the alignment of the reference images with the live video.
  • a user may observe the alignment of the reference images with the live video and determine if the reference images and the live video are aligned or are mis-aligned. If the reference images and live video are mis-aligned, the user may determine if an intervention is required to stop treatment and/or re-align the patient.
  • image registration methods (e.g., a Lucas-Kanade image alignment algorithm, a Baker-Dellaert-Matthews image alignment algorithm, or the OpenCV image alignment package) are used to determine if the reference images and the live video are aligned or are mis-aligned.
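For example, OpenCV's ECC alignment can serve as one such registration method for estimating the in-plane shift and rotation between a reference image and a live frame. The sketch below is illustrative and assumes BGR frames that are converted to grayscale before alignment.

```python
import cv2
import numpy as np

def estimate_misalignment(reference_bgr, live_bgr):
    """Estimate the in-plane shift (pixels) and rotation (degrees) between a
    reference image and a live image using OpenCV's ECC alignment."""
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    live = cv2.cvtColor(live_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    warp = np.eye(2, 3, dtype=np.float32)                     # start from the identity transform
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(ref, live, warp, cv2.MOTION_EUCLIDEAN, criteria, None, 5)

    dx_px, dy_px = float(warp[0, 2]), float(warp[1, 2])       # translation components
    angle_deg = float(np.degrees(np.arctan2(warp[1, 0], warp[0, 0])))  # in-plane rotation
    return dx_px, dy_px, angle_deg
```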
  • a threshold for mismatch is set (e.g., from 1 to 50 mm (e.g., 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0, 14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0, 18.5, 19.0, 19.5, 20.0, 20.5, 21.0, 21.5, 22.0, 22.5, 23.0, 23.5, 24.0, 24.5, 25.0, 25.5, 26.0, 26.5, 27.0, 27.5, 28.0, 28.5, 29.0, 29.5, 30.0, 30.5, 31.0, 31.5, 32.0, 32.5, 33.0, 33.5, 34.0, 34.5, 35.0, 35.5, 36.0, 36.5, 37.0, 37.5, 38.0, 38.5, 39.0, 39.5, 40.0, 40.5, 41.0, 41.5, 42.0, 42.5, 43.0, 43.5, 44.0, 44.5, 45.0, 45.5, 46.0, 46.5, 47.0, 47.5, 48.0, 48.5, 49.0, 49.5, or 50.0 mm)).
  • methods comprise providing an alert or alarm (e.g., a visual, audio, or haptic alert) if the reference images and live video are mis-aligned more than the threshold value.
  • methods comprise suggesting a correction (e.g., a translation and/or a rotation) that will position the patient correctly so that treatment can proceed. For instance, a user may use the information from the OGTS relating to the ΔX, ΔY, and/or ΔZ translation and/or ΔθX, ΔθY, and ΔθZ rotations appropriate for positioning the patient and PPA or patient support correctly for treatment.
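A hedged sketch of the threshold-and-alert logic described above, assuming the misalignment has already been converted to millimeters (e.g., with the pixel-to-mm conversion sketched earlier). The 5 mm default and the print-based alert are placeholders for whatever threshold and alert mechanism a clinic configures within the 1 to 50 mm range noted above.

```python
def check_motion(dx_mm: float, dy_mm: float, threshold_mm: float = 5.0) -> bool:
    """Compare the measured in-plane misalignment against a mismatch threshold
    and report an alert with a suggested corrective translation if it is exceeded."""
    magnitude = (dx_mm ** 2 + dy_mm ** 2) ** 0.5
    if magnitude > threshold_mm:
        print(f"ALERT: patient moved {magnitude:.1f} mm (> {threshold_mm} mm threshold)")
        print(f"Suggested correction: translate by ({-dx_mm:.1f}, {-dy_mm:.1f}) mm")
        return False
    return True
```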
  • methods comprise determining a breathing cycle for a patient and determining appropriate compensatory rhythmic translations and/or rotations of the patient to produce automatically through automated translations and/or rotations of the PPA or patient support.
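As a sketch of how a breathing cycle might be estimated from the tracked motion, the following takes a time series of per-frame displacements and returns the dominant period via an FFT; it assumes breathing is the strongest periodic component of the signal, and the function name is illustrative.

```python
import numpy as np

def breathing_period_seconds(displacements_mm, frame_rate_hz):
    """Estimate the dominant breathing period from a per-frame displacement signal."""
    signal = np.asarray(displacements_mm, dtype=float)
    signal = signal - signal.mean()                    # remove the static offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / frame_rate_hz)
    dominant = freqs[1:][np.argmax(spectrum[1:])]      # skip the zero-frequency bin
    return 1.0 / dominant if dominant > 0 else float("inf")
```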
  • Methods for verifying correct application of a correction vector. In some embodiments, the technology provides methods for verifying correct application of the correction vector.
  • methods comprise obtaining a correction vector (e.g., as described herein by obtaining a pre-treatment CT scan (e.g., as provided by a method for obtaining 1500 a CT scan of a patient during a pre-treatment patient immobilization and imaging phase 1000, wherein the method comprises saving the CT scan as a pre-treatment CT scan of the patient); obtaining a treatment CT scan (e.g., as provided by a method for imaging 2300 a patient during a treatment phase 2000, wherein the method comprises saving a CT scan of a patient as a treatment CT scan of the patient); and registering 2410 the treatment CT scan and the pre-treatment CT scan and obtaining a correction vector).
  • methods for verifying correct application of the correction vector comprise providing reference images.
  • the reference images are provided by a retrieved treatment scene, e.g., as provided by a method or step of retrieving 2450 a saved treatment scene.
  • the reference images are provided by acquiring images of the patient and PPA or patient support in position for treatment of the patient, e.g., as provided by a method or step of saving 2460 a treatment scene.
  • methods comprise using the correction vector to calculate a ΔX, ΔY, and/or ΔZ displacement in real space and/or a ΔθX, ΔθY, and/or ΔθZ rotation in real space that is appropriate to align the treatment volume of interest within the patient body to the treatment plan so that radiation treatment is delivered accurately to the treatment volume of interest.
  • the pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)) is used to calculate an offset (e.g., a translation and/or rotation) for the reference images on the display.
  • Methods comprise applying the offset to the reference images to provide offset reference images on the display to indicate the proper position of the patient and PPA or patient support for treatment. Accordingly, the offset reference images provide images of the patient and PPA or patient support in proper position for treatment upon application of the correction vector. Properly applying the correction vector to move the patient and PPA or patient support in real space according to the correction vector will align the live video with the displaced reference images, and thus provide a verification that the correction vector has been applied correctly.
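A minimal sketch of how a reference image might be offset on the display by the in-plane component of the correction vector, using the same pixels-per-mm calibration to convert millimeters back to pixels; the use of cv2.warpAffine and the function name are illustrative choices, not the OGTS implementation.

```python
import cv2
import numpy as np

def offset_reference_image(reference_bgr, dx_mm: float, dy_mm: float, pixels_per_mm: float):
    """Shift a reference image by the in-plane component of the correction vector so
    that the live video should line up with the shifted reference once the correction
    has been applied to the patient and PPA or patient support in real space."""
    dx_px, dy_px = dx_mm * pixels_per_mm, dy_mm * pixels_per_mm
    h, w = reference_bgr.shape[:2]
    translation = np.float32([[1, 0, dx_px], [0, 1, dy_px]])   # 2x3 affine translation matrix
    return cv2.warpAffine(reference_bgr, translation, (w, h))
```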
  • methods comprise aligning a reference image with a live video image, e.g., aligning a reference image of a PPA or patient support configuration with a live video image of a PPA or patient support configuration, aligning a reference image of a patient position with a live video image of a patient position, aligning a reference image of an imaging position with a live video image of an imaging position, and/or aligning a reference image of a location and/or position of the patient and PPA or patient support for treatment of treatment field n with a live video image of a location and/or position of the patient and PPA or patient support for treatment of treatment field n.
  • methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position.
  • methods comprise viewing the patient and/or the PPA or patient support on the live tracking image and adjusting the patient and/or PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point. See, e.g., FIG. 9.
  • Although certain embodiments of the technology are described herein for convenience in terms of methods comprising superimposing and aligning a red component of a reference image and the green and blue components of an associated live image provided by a camera, the technology is not limited to such embodiments.
  • the technology also encompasses essentially equivalent embodiments of methods comprising superimposing and aligning a green component of a reference image and the red and blue components of an associated live image provided by a camera and embodiments comprising superimposing and aligning a blue component of a reference image and the red and green components of an associated live image provided by a camera.
  • alignment of a live image and a reference image is performed manually by a user interacting with a computer through an input device (e.g., a mouse, keyboard, touch screen, track ball, virtual reality device, etc.) to manipulate (e.g., translate and/or rotate) the live image displayed on the screen and align it with the reference image.
  • a user may use her eye to align the live image and the reference image to determine an adequate match between the live image and the reference image.
  • alignment of a live image and a reference image is performed by image registration methods, e.g., by a Lucas-Kanade image alignment algorithm, a Baker-Dellaert-Matthews image alignment algorithm, or with the OpenCV image alignment package.
  • images are aligned using an automated feature-based alignment, an automated pixel-based alignment, or a Fast Fourier Transform performed by a method encoded in software and performed by a computer.
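As one example of an automated, FFT-based pixel alignment, OpenCV's phase correlation returns the translation between two single-channel frames together with a response value indicating the quality of the match; the sketch below is illustrative and assumes BGR inputs converted to float32 grayscale.

```python
import cv2
import numpy as np

def fft_shift_between(reference_bgr, live_bgr):
    """FFT-based (phase correlation) estimate of the translation between a reference
    image and a live image; the response value indicates how confident the match is."""
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    live = cv2.cvtColor(live_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    (dx_px, dy_px), response = cv2.phaseCorrelate(ref, live)
    return dx_px, dy_px, response
```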
  • steps of the described image alignment methods are implemented in software code, e.g., a series of procedural steps instructing a computer and/or a microprocessor to produce and/or transform data as described above.
  • software instructions are encoded in a programming language such as, e.g., BASIC, C, C++, Java, MATLAB, Mathematica, Perl, Python, or R.
  • one or more steps or components are provided in individual software objects connected in a modular system.
  • the software objects are extensible and portable.
  • the objects comprise data structures and operations that transform the object data.
  • the objects are used by manipulating their data and invoking their methods. Accordingly, embodiments provide software objects that imitate, model, or provide concrete entities, e.g., numbers, shapes, and data structures, that are manipulable.
  • software objects are operational in a computer or in a microprocessor. In some embodiments, software objects are stored on a computer readable medium. In some embodiments, a step of a method described herein is provided as an object method. In some embodiments, data and/or a data structure described herein is provided as an object data structure. Embodiments comprise use of code that produces and manipulates software objects, e.g., as encoded using a language such as but not limited to Java, C++, C#, Python, PHP, Ruby, Perl, Object Pascal, Objective-C, Swift, Scala, Common Lisp, and Smalltalk.
  • the technology relates to a patient positioning system comprising a movable and configurable patient support.
  • the technology relates to a patient positioning system comprising a movable and configurable motorized patient support. See U.S. Pat. No. 11,529,109; see U.S. Pat. App. Ser. No. 17/894,335 and U.S. Prov. Pat. App. Ser. No. 63/438,978, each of which is incorporated herein by reference. Certain aspects of embodiments of the patient positioning system and patient support technologies are described below.
  • the patient support is structured to translate in the X, Y, and/or Z directions.
  • the patient support is structured to rotate around the X, Y, and/or Z axes.
  • the patient support is configured to move with six degrees-of-freedom, e.g., the patient support is structured to translate in the X, Y, and/or Z directions, and the patient support is structured to rotate around the X, Y, and/or Z axes.
  • the patient support comprises a pivotable base and the patient support is structured to pivot around X, Y, and/or Z axes to provide pitch, roll, and yaw rotations.
  • the configurable patient support is structured to tilt or pivot relative to a horizontal plane of the translatable member or any other fixed horizontal surface.
  • Embodiments comprise motors and drive mechanisms engaged with the patient positioning system and/or with the patient support to translate and/or rotate the patient positioning system and/or to translate and/or rotate the patient support.
  • the patient positioning system comprises a translatable member that is vertically translatable such that the translatable member articulates towards and away from a surface on which the patient positioning system is supported.
  • the translatable member is mounted to a supporting structure that is in turn mounted to the surface.
  • the supporting structure provides stability to the patient positioning system and houses a drive mechanism to effect the vertical movement of the translatable member.
  • the patient support is configured to receive and secure a patient in a generally upright position.
  • the patient support is rotatably mounted to the translatable member such that the patient support is rotatable about a vertical axis (e.g., an essentially vertical and/or substantially vertical axis) relative to the translatable member.
  • a lower end of the patient support is mounted to a rotating disc.
  • an upper end of the patient support is mounted to another rotating disc.
  • the patient support is rotatably mounted to the translatable member such that the patient support is rotatable about a vertical axis.
  • the patient support mounted to the translatable member may similarly articulate vertically.
  • the patient support is translatable in a horizontal (e.g., XY) plane, e.g., in addition to being rotatable about a vertical axis.
  • the patient support is translatable in a horizontal plane orthogonal to the vertical axis of rotation.
  • the patient support comprises two pairs of parallel rails in orthogonal relation, the patient support being slidably connected to a first pair of rails for translation in a first orthogonal direction and the first set of rails being slidably connected to a second pair of rails for translation in a second orthogonal direction.
  • motors and drive mechanisms are engaged with each set of rails to translate the patient support in the X and Y directions.
  • the patient support comprises a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops.
  • Embodiments provide that the back rest, seat pan, shin rest, arm rest, a head rest, and/or foot braces or heel stops is/are configurable among a number of positions to accommodate patient ingress and/or egress from the patient support system (e.g., from the patient support assembly) and/or to support a patient in a number of positions for imaging or treatment.
  • As shown in FIG. 1, some embodiments provide a configurable patient support 100 comprising one or more configurable and movable components, e.g., a back rest 110 (e.g., a configurable and movable back rest), an arm rest 170 (e.g., a configurable and movable arm rest), a seat pan 140 (e.g., a configurable and movable seat pan), a shin rest 150 (e.g., a configurable and movable shin rest), and/or a foot brace (e.g., a configurable and movable foot brace) or a heel stop 160 (e.g., a configurable and movable heel stop).
  • the patient support (e.g., integrated patient support or non-integrated patient support) further comprises a head rest (e.g., a configurable and movable head rest).
  • each component (e.g., back rest 110, arm rest 170, seat pan 140, shin rest 150, foot brace or heel stop 160, and head rest) may be moved (e.g., translated and/or rotated) by a human applying force to the component using her hands and no more than typical force provided by an average human.
  • the patient support 100 comprises one or more motorized components, e.g., a motorized back rest (e.g., a back rest 110 operatively engaged with a back rest motor), a motorized head rest (e.g., a head rest operatively engaged with a head rest motor), a motorized arm rest (e.g., an arm rest 170 operatively engaged with an arm rest motor), a motorized seat pan (e.g., a seat pan 140 operatively engaged with a seat pan motor), a motorized shin rest (e.g., a shin rest 150 operatively engaged with a shin rest motor), and/or a motorized foot brace (e.g., a foot brace operatively engaged with a foot brace motor) or a motorized heel stop (e.g., a heel stop 160 operatively engaged with a heel stop motor).
  • the back rest motor is structured to move (e.g., translate and/or rotate) the back rest 110, the head rest motor is structured to move (e.g., translate and/or rotate) the head rest, the arm rest motor is structured to move (e.g., translate and/or rotate) the arm rest 170, the seat pan motor is structured to move (e.g., translate and/or rotate) the seat pan 140, the shin rest motor is structured to move (e.g., translate and/or rotate) the shin rest 150, and the foot brace motor or heel stop motor is structured to move (e.g., translate and/or rotate) the foot brace or the heel stop 160.
  • the OGTS provides a component (e.g., a computer, a microcontroller, and/or a microprocessor) configured to coordinate control and/or movement (e.g., translation) of the patient support in the X, Y, and/or Z directions.
  • the OGTS provides a component (e.g., a computer, a microcontroller, and/or a microprocessor) configured to coordinate control and/or movement (e.g., rotation) of the patient support around the X, Y, and/or Z axes.
  • the OGTS provides a component (e.g., a computer, a microcontroller, and/or a microprocessor) configured to coordinate control and/or movement (e.g., translation) of the patient support in the X, Y, and/or Z directions and configured to coordinate control and/or movement (e.g., rotation) of the patient support around the X, Y, and/or Z axes.
  • the OGTS provides a component (e.g., a computer, a microcontroller, and/or a microprocessor) configured to coordinate control and/or movement of one or more of the motorized components, e.g., the motorized back rest, the motorized head rest, the motorized arm rest, the motorized seat pan, the motorized shin rest, and/or the motorized foot brace or motorized heel stop, to provide the patient support into one or more specific configurations comprising the motorized back rest, the motorized head rest, the motorized arm rest, the motorized seat pan, the motorized shin rest, and/or the motorized foot brace or motorized heel stop in specified positions.
  • Coordinating control and/or movement (e.g., translation and/or rotation) of the patient support or one or more configurable components of the patient support comprises providing or removing a current or voltage from a power supply to a motor engaged with the patient support and/or one or more configurable components of the patient support to move the patient support and/or one or more configurable components of the patient support to the appropriate position.
  • the patient positioning system, patient support, and/or one or more of the back rest, the seat pan, the shin rest, the arm rest, the head rest, and/or the foot braces or the heel stops are translated and/or rotated as appropriate to move the patient support and/or the patient to the appropriate position for imaging or treatment.
  • moving (e.g., translating and/or rotating) the patient positioning system, patient support, and/or one or more of the back rest, the seat pan, the shin rest, the arm rest, the head rest, and/or the foot braces or the heel stops is performed by motors and drive mechanisms (e.g., by providing current and/or voltage to one or more motors).
  • moving (e.g., translating and/or rotating) the patient positioning system, patient support, and/or one or more of the back rest, the seat pan, the shin rest, the arm rest, the head rest, and/or the foot braces or the heel stops is performed by a user engaging with and moving the patient positioning system, patient support, and/or one or more of the back rest, the seat pan, the shin rest, the arm rest, the head rest, and/or the foot braces or the heel stops.
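  • For illustration only, the following minimal sketch shows one way the coordinated, motorized movement described above could be expressed in software; the component names, the MotorDriver interface, the step size, and the tolerance are illustrative assumptions, not the implementation of the OGTS or patient support.

```python
# Minimal, hypothetical sketch of coordinating motorized patient-support components
# toward a specified configuration. Applying or removing drive (modeled here as a
# bounded position increment) stands in for providing or removing current/voltage.
from dataclasses import dataclass


@dataclass
class MotorDriver:
    """Assumed driver for one motorized component (e.g., back rest, seat pan)."""
    name: str
    position_mm: float = 0.0

    def step_toward(self, target_mm: float, tolerance_mm: float = 0.5) -> bool:
        """Move a bounded increment toward target; True when within tolerance."""
        error = target_mm - self.position_mm
        if abs(error) <= tolerance_mm:
            return True                                  # in position: stop driving
        self.position_mm += max(min(error, 5.0), -5.0)   # at most 5 mm per control step
        return False


def drive_to_configuration(drivers: dict[str, MotorDriver],
                           targets_mm: dict[str, float]) -> None:
    """Step every motorized component until the requested configuration is reached."""
    pending = set(targets_mm)
    while pending:
        pending = {n for n in pending if not drivers[n].step_toward(targets_mm[n])}


drivers = {n: MotorDriver(n) for n in ("back_rest", "seat_pan", "shin_rest", "heel_stop")}
drive_to_configuration(drivers, {"back_rest": 120.0, "seat_pan": 430.0,
                                 "shin_rest": 60.0, "heel_stop": 15.0})
print({n: d.position_mm for n, d in drivers.items()})
```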
  • Example 1 – Optical guidance and tracking system
  • An OGTS is provided comprising three cameras as shown in the schematic drawing provided by FIG. 3. Camera 1 (“front”) and Camera 2 (“lateral”) are mounted in the horizontal plane (XY plane) and 90 degrees apart. A patient is placed to face Camera 1. Translational disagreements in the left-right (X) and up-down (Z) directions between the live image and reference images and/or rotational disagreements about the Camera 1 axis (Y) between the live image and reference images are seen on Camera 1.
  • Camera 2 is provided at a 90-degree angle to Camera 1, and Camera 2 thus provides a lateral view of the patient.
  • Camera 3 is provided at a position that is orthogonal both to Camera 1 and to Camera 2. Accordingly, Camera 3 is provided at a position that is on a line normal to the horizontal (XY) plane (i.e., above or below the patient). In practical terms, Camera 3 is positioned above the patient and provides an overhead view of the patient.
  • Translational disagreements in the forward-backward (Y) and left-right (X) directions between the live image and reference images and/or rotational disagreements about the Camera 3 axis (Z) between the live image and reference images are seen on Camera 3.
  • aligning the live images and the reference images in each camera view provides the ΔX, ΔY, and/or ΔZ displacement in real space that is appropriate to position the object (e.g., patient) at the same position where the object (e.g., patient) was positioned when the reference images were obtained and saved.
  • rotations in space can be calculated by analyzing the differences between live camera views and saved images from the three cameras.
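  • For illustration only, a minimal sketch (camera names, sign conventions, and the averaging of redundant axes are assumptions) of how per-camera image offsets, once converted to millimeters, could be combined into a single (ΔX, ΔY, ΔZ) correction following the axis assignments of Example 1.

```python
# Hypothetical sketch: combine per-camera live-vs-reference offsets (already in mm)
# into one world-frame correction. Each camera view constrains the two axes
# perpendicular to its optical axis; axes seen by two cameras are averaged.
AXIS_MAP = {
    "camera1_front":    ("X", "Z"),   # optical axis along Y
    "camera2_lateral":  ("Y", "Z"),   # optical axis along X
    "camera3_overhead": ("X", "Y"),   # optical axis along Z
}


def combine_offsets(offsets_mm: dict[str, tuple[float, float]]) -> dict[str, float]:
    """offsets_mm maps camera name -> (horizontal, vertical) offset in mm."""
    sums = {"X": 0.0, "Y": 0.0, "Z": 0.0}
    counts = {"X": 0, "Y": 0, "Z": 0}
    for cam, (dh, dv) in offsets_mm.items():
        for axis, value in zip(AXIS_MAP[cam], (dh, dv)):
            sums[axis] += value
            counts[axis] += 1
    return {axis: sums[axis] / counts[axis] if counts[axis] else 0.0 for axis in sums}


# Example: the front camera sees 12 mm left-right and -3 mm up-down mismatch, etc.
print(combine_offsets({
    "camera1_front": (12.0, -3.0),
    "camera2_lateral": (5.0, -2.5),
    "camera3_overhead": (11.0, 4.8),
}))
```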
  • Example 2 – Upright imaging and positioning system
  • the technology relates to an imaging system comprising an upright patient positioning apparatus or a patient positioner of a patient positioning system (see, e.g., FIG. 1A and FIG. 1B) and an upright helical CT scanner (see, e.g., FIG. 2A to FIG. 2D).
  • See, e.g., U.S. Pat. App. Pub. No. 2022/0183641, MULTI-AXIS MEDICAL IMAGING (U.S. Pat. App. Ser. No. 17/535,091), which is explicitly incorporated herein by reference.
  • the upright patient positioning apparatus or a patient positioner of a patient positioning system provides for positioning a patient in a seated or perched (semi-standing) position while the CT acquires diagnostic quality CT images of the patient in the treatment orientation. See FIG. 5A.
  • the beam delivery system is typically behind the back wall and comprises a high energy x-ray or electron beam delivery system or a particle therapy beam delivery system.
  • the upright imaging and positioning system allows for installing an OGTS comprising five cameras.
  • a typical camera installation in a treatment room is shown in FIG. 5B.
  • Cameras 1, 2, 4, and 5 are installed in the horizontal plane, and Camera 3 is installed directly above the isocenter. As discussed above, cameras 1, 2, and 3 are orthogonal to each other. The technology is not limited by this arrangement.
  • the technology also contemplates embodiments in which cameras 4 and 5 are also orthogonal to each other and each is orthogonal to camera 3.
  • Camera 1 and Camera 2 are closest to the room entrance and are used together with Camera 3 for capturing the reference images of the patient in the setup position and for verifying the patient in the setup position during subsequent imaging and treatment procedures.
  • Camera 4 and Camera 5 are used to verify and track the patient position during imaging and in various treatment fields attained by rotating the patient about the vertical (Z) axis.
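  • For illustration only, a minimal sketch of the five-camera layout described above with assumed camera directions (cameras 1, 2, 4, and 5 in the horizontal plane at 90-degree intervals and camera 3 directly above the isocenter); the dot-product check simply reports which viewing axes are mutually orthogonal.

```python
# Hypothetical sketch: unit viewing axes (camera toward the isocenter, assumed at
# the origin) for the five-camera arrangement, with a dot-product orthogonality check.
import numpy as np

VIEW_AXES = {
    "camera1": np.array([0.0, 1.0, 0.0]),    # front
    "camera2": np.array([1.0, 0.0, 0.0]),    # lateral (left)
    "camera3": np.array([0.0, 0.0, -1.0]),   # overhead
    "camera4": np.array([0.0, -1.0, 0.0]),   # rear
    "camera5": np.array([-1.0, 0.0, 0.0]),   # lateral (right)
}

for a in VIEW_AXES:
    for b in VIEW_AXES:
        if a < b:
            orthogonal = bool(np.isclose(VIEW_AXES[a] @ VIEW_AXES[b], 0.0))
            print(f"{a} vs {b}: {'orthogonal' if orthogonal else 'not orthogonal'}")
```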
  • FIG. 5C shows a design of an embodiment of an OGTS system provided in a treatment room 710, a control room 720, and a technical room 730.
  • A view of section A showing the treatment room 710 and a view of section B showing the control room 720 and technical room 730 are shown in FIG. 5D and FIG. 5E, respectively.
  • the treatment room 710 comprises an OGTS, and the OGTS comprises an overhead camera E (235) and four peripheral cameras A, B, C, and D (231, 232, 233, and 234) in exemplary but not limiting positions.
  • the treatment room further comprises a patient positioning system.
  • a technician 901 is shown in the treatment room 710.
  • One inset shows a keyboard, mouse, and display in the treatment room 710 for use by a technician 901, e.g., to control the OGTS and perform the methods described herein; and a second inset shows a keyboard, mouse, and display in the control room 720, e.g., to control the OGTS and perform the methods described herein.
  • the treatment room and control room may each comprise a desk or shelf, a network socket, and a single phase power outlet.
  • the technical room may comprise computers for data analysis, data storage, computing power, system diagnostics, and/or other computational support for the OGTS system.
  • the technical room comprises a number of network sockets for connecting to components of the OGTS in the treatment room and in the control room.
  • the four peripheral cameras A, B, C, and D (231, 232, 233, and 234) are at a height that is the height of the treatment room isocenter, e.g., with a tolerance of ±50 mm.
  • each of the four peripheral cameras A, B, C, and D (231, 232, 233, and 234) is positioned from 2300 mm to 6800 mm from the treatment room isocenter.
  • the cameras use lenses matched to camera-to-isocenter distance ranges of 2300–3400 mm, 3200–4800 mm, or 4600–6800 mm to show the field of view at 80% to 120%.
  • the OGTS uses 20.2-megapixel cameras (e.g., SVS Vistek EXO 183 TR, 1” sensor format) with a 5,496 × 3,672 pixel resolution and high quality lenses.
  • the cameras have an exemplary focal length of 25 mm, 35 mm, or 50 mm.
  • the cameras are located at a defined distance from the treatment room isocenter, and the lenses are selected according to the distance between the camera and the treatment room isocenter to attain a field of view at the isocenter plane that is approximately 1.5 m (vertical) × 1.0 m (horizontal).
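  • For illustration only, a minimal sketch of the pinhole-model arithmetic behind matching focal length to camera-to-isocenter distance; the 13.2 mm × 8.8 mm sensor dimensions and the mid-range distances chosen below are assumptions, and the resulting field of view and ~0.27 mm/pixel scale are consistent with the figures given in this example.

```python
# Hypothetical sketch: approximate field of view at the isocenter plane for a
# thin-lens/pinhole model, FOV = sensor_dimension * distance / focal_length.
SENSOR_MM = (13.2, 8.8)   # assumed sensor dimensions for a 1-inch, 5496 x 3672 sensor
PIXELS = (5496, 3672)


def field_of_view_mm(focal_length_mm: float, distance_mm: float) -> tuple[float, float]:
    long_dim, short_dim = SENSOR_MM
    return (long_dim * distance_mm / focal_length_mm,
            short_dim * distance_mm / focal_length_mm)


for focal, distance in ((25.0, 2850.0), (35.0, 4000.0), (50.0, 5700.0)):
    fov_long, fov_short = field_of_view_mm(focal, distance)
    scale = fov_long / PIXELS[0]          # mm per image pixel at the isocenter plane
    print(f"f={focal:.0f} mm at {distance:.0f} mm: "
          f"{fov_long:.0f} x {fov_short:.0f} mm, ~{scale:.3f} mm/pixel")
```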
  • One client computer may serve as a master client that is used to start new OGTS sessions and stop open sessions on all clients. All other clients may be used to monitor OGTS activities.
  • the graphic user interface (GUI) of the OGTS software as displayed on the client computers is shown in FIG.6.
  • the camera configurations, settings, operational parameters, and/or camera calibrations are set on the host GUI and are described in the OGTS user manual.
  • the GUI allows for selecting two to four cameras installed in the horizontal plane to be displayed in the left and right windows.
  • the top camera is always selected and is displayed in the top middle window.
  • Camera 1 and Camera 2 are selected – Camera 1 is facing the patient positioning apparatus or patient positioner and Camera 2 shows the patient positioning apparatus or patient positioner from the left side. Camera 3 is above the patient positioning apparatus or patient positioner.
  • software control buttons are shown that are used for a user to start and stop tracking, to start tracking with new reference images, and to save the current viewing configuration as a new recorded scene.
  • the camera images shown in FIG. 6 are the images provided by the cameras when the cameras are fully zoomed out to provide the full field of view for each camera.
  • These fields of view may be used to capture and verify information describing the configuration of the patient positioning apparatus or patient positioner, e.g., the initial position settings of the patient positioning apparatus or patient positioner before the patient is placed on the patient positioning apparatus or patient positioner and the patient is immobilized in the appropriate posture for imaging or treatment.
  • the initial position settings of the patient positioning apparatus or patient positioner may include information describing the position of the seat pan, the foot rest or heel stop, the shin rest, the back rest, and/or the arm rests.
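  • For illustration only, a minimal sketch (component names, values, and the 2 mm tolerance are assumptions) of recording the initial position settings and later verifying that the same configuration has been reproduced.

```python
# Hypothetical sketch: compare live patient-positioner settings against saved
# reference settings and report any component outside an assumed tolerance.
REFERENCE_SETTINGS_MM = {"seat_pan": 430.0, "heel_stop": 15.0, "shin_rest": 60.0,
                         "back_rest": 120.0, "arm_rest": 250.0}


def verify_configuration(live_mm: dict[str, float], tolerance_mm: float = 2.0) -> list[str]:
    """Return components whose live setting is missing or off by more than the tolerance."""
    return [name for name, ref in REFERENCE_SETTINGS_MM.items()
            if name not in live_mm or abs(live_mm[name] - ref) > tolerance_mm]


mismatches = verify_configuration({"seat_pan": 430.5, "heel_stop": 15.0, "shin_rest": 66.0,
                                   "back_rest": 120.0, "arm_rest": 250.0})
print(mismatches or "configuration verified")   # -> ['shin_rest']
```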
  • a region of interest (ROI) may be selected in each camera view to zoom in on a specific region in the image (e.g., by selecting a subset of the camera sensor array to display as an image).
  • the ROI may be used to zoom in on a specific region of interest of the patient typically during treatment of the patient.
  • when an ROI is selected, the camera sends only the ROI information to the host computer, which provides a faster data transfer rate and hence a faster monitoring repetition rate.
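  • For illustration only, a minimal sketch (frame size taken from the camera specification above; ROI coordinates are assumptions) of why transmitting only an ROI reduces the data per frame and therefore supports a faster monitoring repetition rate.

```python
# Hypothetical sketch: only the selected subset of the sensor array is transferred,
# so each ROI frame carries far less data than a full-resolution frame.
import numpy as np

FULL_FRAME = np.zeros((3672, 5496, 3), dtype=np.uint8)   # full-resolution RGB frame


def crop_roi(frame: np.ndarray, top: int, left: int, height: int, width: int) -> np.ndarray:
    """Return only the ROI, as a camera configured for partial readout would transmit."""
    return frame[top:top + height, left:left + width]


roi = crop_roi(FULL_FRAME, top=1200, left=2000, height=800, width=1000)
reduction = FULL_FRAME.nbytes / roi.nbytes
print(f"ROI transfers {roi.nbytes / 1e6:.1f} MB per frame instead of "
      f"{FULL_FRAME.nbytes / 1e6:.1f} MB (about {reduction:.0f}x less data)")
```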
  • a recorded scene contains information identifying the specific viewing environment (e.g., the selected cameras and the region of interest for each camera).
  • FIG. 7 shows a person placed in the patient positioning apparatus or patient positioner with the cameras zoomed in according to different ROIs for each camera.
  • the zoomed-out views of the patient are used to capture the patient posture and immobilization devices, and the zoomed-in or specific ROI views are used to focus on specific regions of interest (e.g., the anatomical region to be treated).
  • the zoomed-in views provide higher frame rates from the cameras because the cameras send less information over the network.
  • the OGTS may record and save scenes for use later.
  • the configuration of the OGTS (e.g., the selected cameras and ROIs) is saved as part of a recorded scene.
  • the camera images of the patient and/or patient positioning apparatus or patient positioner are saved as reference images.
  • the scene and reference images may be retrieved to reproduce the patient posture at a subsequent time.
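  • For illustration only, a minimal sketch (field names, file names, and ROI values are assumptions) of the information a recorded scene might carry: the selected cameras, the ROI for each camera, and the saved reference image for each camera.

```python
# Hypothetical sketch of a recorded scene: enough information to reproduce the
# viewing environment (selected cameras and ROIs) and recall the reference images.
from dataclasses import dataclass, field


@dataclass
class CameraView:
    roi: tuple[int, int, int, int]      # top, left, height, width in sensor pixels
    reference_image: str                # path to the saved reference image


@dataclass
class Scene:
    name: str
    views: dict[str, CameraView] = field(default_factory=dict)


treat_7 = Scene(
    name="treat 7",
    views={
        "camera1": CameraView(roi=(1200, 2000, 800, 1000), reference_image="treat7_cam1.png"),
        "camera3": CameraView(roi=(900, 1800, 900, 1200), reference_image="treat7_cam3.png"),
    },
)
print(treat_7.name, sorted(treat_7.views))
```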
  • Scene selection panels are shown in FIG. 8. Thumbnails of the real images that comprise a scene are shown in a selection dialogue.
  • the left panel shows the setup scenes that were recorded; the right panel shows some of the treatment scenes that were recorded. Any of these scenes may be selected during the workflow.
  • Setup scenes are used at the initial stages of the workflow and typically use zoomed out views to provide more information.
  • Treatment scenes are used at later stages in the workflow and use views that are zoomed in. Selecting a scene is illustrated in the right panel of FIG.8. The result of selecting the scene named “treat 7” is illustrated in FIG.9. Selecting a scene automatically initiates a tracking routine.
  • the OGTS software displays the sum of the red component of the RGB reference image and the green and blue components of the live RGB image in each of the camera views.
  • when the images are aligned, the red and green/blue color casts in the composite disappear because all portions of the RGB images are in alignment and combine to provide a correctly colored RGB image (at least in the region where the images align).
  • the patient head was slightly rotated to the left about the vertical axis resulting in a significant mismatch towards the front of his face. The lateral alignment remained reasonable.
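  • For illustration only, a minimal sketch of the composite display described above, implemented as channel replacement (equivalent to summing a red-only reference with a green/blue-only live image), with random data standing in for camera frames and an assumed R, G, B channel order.

```python
# Hypothetical sketch: red channel from the reference image, green and blue channels
# from the live image. Aligned regions appear in normal color; misaligned regions
# show red/cyan fringing.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)   # saved reference frame
live_shifted = np.roll(reference, shift=12, axis=1)                    # live frame displaced 12 px


def composite(reference_rgb: np.ndarray, live_rgb: np.ndarray) -> np.ndarray:
    """Combine the reference red channel with the live green and blue channels."""
    out = live_rgb.copy()
    out[..., 0] = reference_rgb[..., 0]       # channel order assumed to be R, G, B
    return out


# A perfectly aligned live frame reproduces the reference exactly (normal colors).
print(bool(np.array_equal(composite(reference, reference), reference)))        # True
# A displaced live frame no longer matches either input, so the mismatch is visible.
print(bool(np.array_equal(composite(reference, live_shifted), live_shifted)))  # False here
```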
  • the OGTS is used to calculate the misalignment between the reference images and the images in the live scenario.
  • the cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space.
  • cameras used during the development of embodiments of the technology described herein have a ratio of real world distance to camera sensor pixels and/or image pixels of approximately 0.2723 mm/pixel in the horizontal direction and approximately 0.2729 mm/pixel in the vertical direction (e.g., approximately 0.3 mm/pixel), which is equivalent to 3.672 pixels/mm in the horizontal direction and 3.6620 pixels/mm in the vertical direction (e.g., approximately 3.7 pixels/mm).
  • a live image that is misaligned by, e.g., 100 pixels relative to a reference image indicates that the patient should be translated in space by approximately 30 mm in the appropriate plane imaged by the camera to reproduce the set-up position recorded in the reference image.
  • the ΔX, ΔY, and/or ΔZ displacement appropriate to bring the patient to the position in real space recorded by the reference images can be obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ΔX, ΔY, and/or ΔZ displacement according to the pixel size per mm relationship.
  • the OGTS software also allows for rotating images to determine the angular corrections needed to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured.
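  • For illustration only, a minimal sketch of converting an image-space misalignment in pixels into a real-space displacement in millimeters using the calibration figures above; the 100-pixel case reproduces the roughly 30 mm shift described in the text.

```python
# Hypothetical sketch: scale a per-camera pixel offset by the calibrated pixel size
# at the isocenter plane (~0.2723 mm/pixel horizontal, ~0.2729 mm/pixel vertical).
MM_PER_PIXEL = {"horizontal": 0.2723, "vertical": 0.2729}


def displacement_mm(offset_px: tuple[float, float]) -> tuple[float, float]:
    """Convert a (horizontal, vertical) pixel offset in one camera view to mm."""
    return (offset_px[0] * MM_PER_PIXEL["horizontal"],
            offset_px[1] * MM_PER_PIXEL["vertical"])


# The 100-pixel example from the text: about 27 mm, i.e., approximately 30 mm.
print(displacement_mm((100.0, 0.0)))   # -> (27.23, 0.0)
```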


Abstract

Provided herein is technology relating to medical imaging and radiological treatment and particularly, but not exclusively, to methods and systems for monitoring the safe movement of a patient positioning system, for verifying the setup of a patient positioning system, for verifying the setup of a patient on the patient positioning system, and for monitoring patient position during medical imaging and/or radiological treatment.

Description

ASTO-41250.601 OPTICAL GUIDANCE AND TRACKING FOR MEDICAL IMAGING This application claims the benefit of US provisional patent application serial number 63/462,563, filed April 28, 2023, which is incorporated herein by reference in its entirety. FIELD Provided herein is technology relating to medical imaging and radiological treatment and particularly, but not exclusively, to methods and systems for monitoring the safe movement of a patient positioning system, for verifying the setup of a patient positioning system, for verifying the setup of a patient on the patient positioning system, and for monitoring patient position to detect motion of the patient during medical imaging and/or radiological treatment. BACKGROUND Medical imaging and radiation therapy are widely used in medical diagnosis and treatment of diseases such as cancer. Precisely providing a radiation dose to the portion of the body to be imaged or treated and avoiding exposing healthy tissue to radiation maximizes the therapeutic benefits of imaging and treatment and minimizes the risk of unneeded radiation exposure. Accordingly, medical imaging and radiation therapy procedures typically involve immobilizing a patient in an appropriate position so that the relevant area of the patient body to be imaged and/or treated is stable and static. Comfortable immobilization of a patient can be provided by a patient positioning system that supports the patient body in a position for imaging and/or treatment. Patient positioning systems are configurable apparatuses having a number of movable components that support and immobilize the patient torso, arms, legs, hands, feet, head, and neck in position for imaging and treatment. See, e.g., U.S. Pat. App. Ser. No. 17/894,335 and U.S. Pat. App. Pub. No.20200268327, each of which is incorporated herein by reference. Technologies that help a medical service provider to position and monitor patients are needed to improve patient treatment outcomes and increase the safety and efficiency of medical imaging and radiation treatments. SUMMARY Accurately and reproducibly placing a patient positioning system into position for imaging and treatment and accurately and reproducibly positioning a patient on the patient positioning system for imaging and treatment are both important for achieving positive treatment outcomes. See, e.g., Verhey (1982) “Precise Positioning of Patients for ASTO-41250.601 Radiation Therapy” Int. J. Radiation Oncology Biol. Phys.8: 289–94, incorporated herein by reference. Further, monitoring the location of patient positioning components during movement and setup is important to avoid collisions with other objects and individuals who may be near the patient positioning system during configuration. And, monitoring the location of the patient during imaging and/or treatment and detecting patient movements during imaging and/or treatment improves the accuracy of imaging and/or treatment, decreases exposure of healthy tissue to radiation, and can improve safety of patients (e.g., by detecting a patient fall or errant placement of healthy tissue in a position where it may be exposed to radiation). For example, one neglected aspect of patient positioning for radiation therapy is surface guidance (SG). SG technologies image and monitor the external surface of a patient to provide a preliminary alignment of the patient body before assessing the positions of internal organs. 
It is fair to say that detecting misalignment of an external surface of a patient is a good indicator that the internal organs of the patient are also misaligned. Thus, SG technologies provide a way to increase the accuracy and efficiency of patient imaging and treatment. Accordingly, in some embodiments, the technology provides an orthogonal implementation of three cameras (e.g., an overhead camera and two peripheral cameras that are mutually orthogonal). In some embodiments, the technology provides for correcting the position of an object (e.g., a patient) position using the cameras, e.g., the overhead camera shows rotation errors about the vertical (Z) axis and shows X and Y translations; and the peripheral cameras show errors in the vertical direction. In some embodiments, the technology provides a first camera that directly faces the object (e.g., patient) such that transverse movements of the object are detected in the first camera view and longitudinal movements are detected by the other two cameras that are orthogonal to the first camera. In some embodiments, the technology provides an ability to obtain a four-degree correction from the three orthogonal cameras, e.g., adjustments in the X and Y directions and rotation around the Z axis are determined by aligning live views from the top camera with a reference image; and adjustments of vertical position are determined by aligning peripheral camera live views with reference images. In some embodiments, reference images are grouped into scenes that can be recalled as needed. In some embodiments, the technology provides an interface that allows a user to select a region of interest from individual camera views to provide precise information for the object in the selected region. In some embodiments, cameras only send data within a selected region of interest to the host computer, thereby reducing the amount of data ASTO-41250.601 that is transferred to the computer and providing fast frame rates. In some embodiments, the technology provides an interface through which a user may select a number of cameras that provide the best views of the object (e.g., patient) based on the orientation of the patient and locations of the cameras. In some embodiments, the technology provides an interface through which a user may draw reference lines on a camera image that do not move with the image (e.g., a vertical and/or a horizontal line). In some embodiments, the lines may be adjusted to intersect with a reference point in the treatment room, e.g., the isocenter. The lines may serve the same purpose as laser lines in the room. Moving the object until a point or marker on the object (which is visible in the live image) intersects with the fixed reference lines on two cameras allows for aligning the object with the reference point in the room. In some embodiments, the technology provides for offsetting a reference image according to a position correction (e.g., correction vector) to be applied to the patient and/or patient support. The mismatch between the offset reference image and the live view will be minimized and/or disappear if the correction vector is applied correctly, thus providing a technology to verify the correct application of the correction vector. For instance, in some embodiments, the technology provides an optical guidance and tracking system (OGTS). 
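For illustration only, the following minimal sketch (random data stands in for camera frames, and the correction is simplified to a whole-pixel, in-plane translation) shows the reference-image offsetting idea described above: once the correction vector has actually been applied to the patient and/or patient support, the residual mismatch between the offset reference image and the live view shrinks toward zero.

```python
# Hypothetical sketch: offset the reference image by the planned correction and use
# the residual mismatch against the live view to verify the correction was applied.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)   # grayscale reference frame
correction_px = (0, 25)                                             # planned shift (rows, cols)
# Live view after the patient/support has been moved by the correction (idealized).
live_after_correction = np.roll(reference, shift=correction_px, axis=(0, 1))


def mean_abs_mismatch(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean(np.abs(a.astype(np.int16) - b.astype(np.int16))))


offset_reference = np.roll(reference, shift=correction_px, axis=(0, 1))
print(mean_abs_mismatch(reference, live_after_correction))         # large residual mismatch
print(mean_abs_mismatch(offset_reference, live_after_correction))  # 0.0: correction verified
```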
In some embodiments, the OGTS comprises an overhead camera; and a first peripheral camera, wherein a field of view of the overhead camera is orthogonal to a field of view of the first peripheral camera. In some embodiments, the OGTS further comprises a second peripheral camera, wherein a field of view of the second peripheral camera is orthogonal to the field of view of the overhead camera; and the field of view of the second peripheral camera is orthogonal to the field of view of the first peripheral camera. In some embodiments, the OGTS further comprises a third peripheral camera, wherein the fields of view of any two of the peripheral cameras and the overhead camera are all mutually orthogonal. In some embodiments, the OGTS further comprises a fourth peripheral camera, wherein the fields of view of any two of the peripheral cameras and the overhead camera are all mutually orthogonal. In some embodiments, the OGTS further comprises a patient support. In some embodiments, the patient support rotates around a vertical (Z) axis. In some embodiments, the field of view of the overhead camera is aligned with the vertical (Z) axis. In some embodiments, the OGTS further comprises a radiation therapy apparatus. In some embodiments, the radiation therapy apparatus comprises a static source. In some embodiments, the OGTS further comprises a computerized tomography ASTO-41250.601 (CT) scanner. In some embodiments, the overhead camera provides a view through a bore of a scanner ring of the CT scanner. In some embodiments, the overhead camera comprises a color sensor array and the first peripheral camera comprises a color sensor array. In some embodiments, the OGTS further comprises a processor and a non- transitory computer-readable medium. In some embodiments, the non-transitory computer-readable medium comprises a program and the processor executes the program to acquire color images from the overhead camera and to acquire images from the peripheral camera. In some embodiments, the OGTS further comprises a display. In some embodiments, the non-transitory computer-readable medium comprises a program and the processor executes the program to superimpose a live video over a reference image on the display. In some embodiments, the non-transitory computer-readable medium comprises a program and the processor executes the program to provide a graphical user interface on the display. In some embodiments, a user interacts with the graphical user interface to identify a region of interest of a camera view. In some embodiments, a user interacts with the graphical user interface to align the live video and the reference image on the display. In some embodiments, the processor calculates an adjustment in real space to position a patient properly for a treatment. In some embodiments, the OGTS further comprises a database comprising a saved scene. In some embodiments, the saved scene comprises an image, information identifying a camera that provided the image, and a region of interest for the image. In some embodiments, the first peripheral camera is located on a major Y axis of the OGTS. In some embodiments, the first peripheral camera is located on a major Y axis of the OGTS and the second peripheral camera is located on a major X axis of the OGTS. In some embodiments, the overhead camera is located on a major Z axis of the OGTS. The technology further provides embodiments of methods. 
For example, in some embodiments, methods comprise obtaining a first reference image of a patient support and/or a patient; superimposing a first live image of a patient support and/or a patient over the reference image; aligning the first live image and the first reference image to determine a displacement; and moving the patient support and/or the patient according to the displacement. In some embodiments, the first reference image was provided by a first camera and the first live image is provided by the first camera. In some embodiments, methods further comprise obtaining a second reference image of the patient support and/or the patient; and superimposing a second live image of the patient support and/or the patient over the second reference image. In some embodiments, the ASTO-41250.601 second reference image was provided by a second camera and the second live image is provided by the second camera; and wherein a field of view of the second camera is orthogonal to a field of view of the first camera. In some embodiments, the first camera is an overhead camera. In some embodiments, the first camera is a peripheral camera. In some embodiments, aligning the first live image and the first reference image comprises a user interacting with a graphical user interface to align the first live image and the first reference image. In some embodiments, aligning the first live image and the first reference image comprises using an image alignment software to align the first live image and the first reference image. In some embodiments, a saved scene comprises the first reference image. In some embodiments, the saved scene comprises the first reference image, information identifying a camera that provided the first reference image, and a region of interest for the first reference image. In some embodiments, the displacement comprises a translation in the X, Y, and/or Z directions and/or a rotation around the X, Y, and/or Z axes. In some embodiments, methods further comprise determining a relationship between the pixel size of the first camera to a distance in real space. In some embodiments, methods further comprise contacting the patient with radiation. In some embodiments, methods further comprise imaging the patient using computerized tomography. Further embodiments of methods comprise obtaining a first reference image of a patient support and/or a patient; superimposing a first live image of a patient support and/or a patient over the reference image; displacing the reference image according to a correction vector; applying the correction vector to the patient support and/or the patient; and verifying correct application of the correction vector using alignment of the first live image and the first reference image. In some embodiments, application of the correction vector is correct when the first live image and the first reference image are substantially, maximally, or essentially aligned. Some portions of this description describe the embodiments of the technology in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. 
Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described ASTO-41250.601 operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof. Certain steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In some embodiments, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all steps, operations, or processes described. In some embodiments, systems comprise a computer and/or data storage provided virtually (e.g., as a cloud computing resource). In particular embodiments, the technology comprises use of cloud computing to provide a virtual computer system that comprises the components and/or performs the functions of a computer as described herein. Thus, in some embodiments, cloud computing provides infrastructure, applications, and software as described herein through a network and/or over the internet. In some embodiments, computing resources (e.g., data analysis, calculation, data storage, application programs, file storage, etc.) are remotely provided over a network (e.g., the internet; and/or a cellular network). Embodiments of the technology may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. Additional embodiments will be apparent to persons skilled in the relevant art based on the teachings contained herein. BRIEF DESCRIPTION OF THE DRAWINGS These and other features, aspects, and advantages of the present technology will become better understood with regard to the following drawings. FIG. 1A is a drawing in an oblique view of a patient support showing the axes of a coordinate system. ASTO-41250.601 FIG. 1B is a drawing in a side view of a patient support showing the axes of a coordinate system. FIG. 1C is a schematic drawing showing a data structure for a scene. FIG. 1D is a schematic drawing showing a data structure for a scene. FIG. 2A is a drawing in a top view of a medical imaging and/or radiotherapy system comprising an optical guidance tracking system (comprising a number of cameras) and a patient positioning system (e.g., comprising a patient support) comprising a patient. FIG. 2B is a drawing in a side view of a medical imaging and/or radiotherapy system comprising an optical guidance tracking system (comprising a number of cameras) and a patient positioning system (e.g., comprising a patient support) comprising a patient. FIG. 
2C is a drawing in a top view of a medical imaging and/or radiotherapy system comprising an optical guidance tracking system (comprising a number of cameras) and a patient positioning system (e.g., comprising a patient support) comprising a patient. FIG. 2D is a drawing in a side view of a medical imaging and/or radiotherapy system comprising an optical guidance tracking system (comprising a number of cameras) and a patient positioning system (e.g., comprising a patient support) comprising a patient. FIG. 2E is a schematic drawing showing three mutually orthogonal cameras. FIG. 2F is a schematic drawing showing five cameras. Four cameras are placed in a plane at intervals of 90°, and the fifth camera is located above the plane of the four cameras and is at a 90° angle to each of the four cameras placed in the plane. FIG. 3 is a schematic drawing showing three mutually orthogonal cameras and the corrections (e.g., translations in the X, Y, and/or Z directions; and/or rotations around the X, Y, and/or Z axes) of a patient position that can be derived from each camera view. FIG. 4 is a drawing in an oblique view of a patient support indicating configurable components of the patient support. FIG. 5A is a rendering of a treatment room comprising an upright patient positioning and imaging system. FIG. 5B is a drawing showing a clinical installation of the technology described herein. The drawing shows an upright imaging and/or treatment system, four orthogonal cameras in a horizontal plane, and an overhead camera above the isocenter. ASTO-41250.601 FIG. 5C shows a design of an embodiment of an OGTS system provided in a treatment room 710, an OGTS control room 720, and an OGTS technical room 730. FIG. 5D shows a view of section A from FIG.5C showing the treatment room 710. FIG. 5E shows a view of section B from FIG.5C showing the control room 720 and technical room 730. FIG. 6 shows a graphic user interface (GUI) of the OGTS software as displayed on a display of a client computer. FIG. 7 shows the GUI of the OGTS software showing a person in a patient support and with the cameras zoomed in to a specified region of interest for each camera. FIG. 8 shows the GUI of the OGTS software displaying two scene selection panels. The left panel shows setup scenes that were previously recorded (e.g., comprising reference images) and that may be retrieved for use. The right panel shows treatment scenes (e.g., comprising reference images) that were recorded. FIG. 9 shows the GUI of the OGTS software during a tracking session using a reference image and a live tracking image. The left and top panels of the GUI show that the head of the person was slightly rotated to the left around the vertical axis, which caused a significant mismatch of the live position towards the front of the face with respect to the previously recorded patient position shown in the reference image. The lateral alignment remained reasonable. FIG. 10A is a flow chart showing steps of an embodiment of a method for pre- treatment patient immobilization and imaging. FIG. 10B is a flow chart showing steps of an embodiment of a method for initial patient immobilization. Methods for initial patient immobilization may be performed, e.g., for a new patient, for a new treatment of a patient, for a treatment of a new region of a patient, or for a treatment of a patient in a new patient posture. FIG. 10C is a flow chart showing steps of an embodiment of a method for subsequent patient immobilization. 
Methods for initial subsequent immobilization may be performed, e.g., when a patient position, a PPA or patient support configuration, and/or an imaging position have previously been determined and saved in a configuration setup scene, patient position setup scene, and/or imaging position scene, respectively. FIG. 10D is a flow chart showing steps of an embodiment of a method for obtaining a pre-treatment CT scan of a patient. ASTO-41250.601 FIG. 11A is a flow chart showing steps of an embodiment of a method for treating a patient with radiation. FIG. 11B is a flow chart showing steps of an embodiment of a method for loading a patient on a PPA or patient support appropriate for treating the patient. FIG. 11C is a flow chart showing steps of an embodiment of a method for imaging a patient for treatment, e.g., by obtaining a treatment CT scan of the patient. FIG. 11D is a flow chart showing steps of an embodiment of a method for treating a patient with radiation. It is to be understood that the figures are not necessarily drawn to scale, nor are the objects in the figures necessarily drawn to scale in relationship to one another. The figures are depictions that are intended to bring clarity and understanding to various embodiments of apparatuses, systems, and methods disclosed herein. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Moreover, it should be appreciated that the drawings are not intended to limit the scope of the present teachings in any way. DETAILED DESCRIPTION Provided herein is technology relating to medical imaging and radiological treatment and particularly, but not exclusively, to methods and systems for monitoring the safe movement of a patient positioning system, for verifying the setup of a patient positioning system, for verifying the setup of a patient on the patient positioning system, and for monitoring patient position during medical imaging and/or radiological treatment. The technology provided herein is an optical guidance and tracking system (OGTS) comprising multiple (e.g., 3, 4, or 5) high-resolution (e.g., approximately 20 megapixels) optical cameras of which three or more of the cameras are positioned to be mutually orthogonal to one another. The OGTS simultaneously obtains multiple (e.g., at least three) high-resolution two-dimensional images of a patient from at least three directions, which provides synchronous live views from at least three orthogonal directions. Accordingly, the use of orthogonal live images provides an improved imaging technology relative to conventional technologies that obtain images and subsequently construct a three-dimensional image. The OGTS finds use in methods comprising recording reference images from a plurality (e.g., 2, 3, 4, or 5) of the OGTS cameras when an object (e.g., a patient) is at a desired position and orientation. Further, methods comprise saving the reference images ASTO-41250.601 to provide saved reference images for each camera. Reference images may be grouped into scenes that may be recalled as needed. In some embodiments, methods comprise tracking the position of the object (e.g., the patient) by obtaining live images from the plurality of cameras and overlaying the live images on the saved reference images for each camera. 
In some embodiments, methods for repositioning the object (e.g., the patient) comprise retrieving saved reference images (e.g., as part of a saved scene) from storage, obtaining live images from the plurality of cameras, overlaying the live images on the saved reference images for each camera. Differences in the live and overlaid images may be used to determine appropriate translations and rotations for repositioning the object (e.g., the patient) to reproduce the position in the reference images. In this detailed description of the various embodiments, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the embodiments disclosed. One skilled in the art will appreciate, however, that these various embodiments may be practiced with or without these specific details. In other instances, structures and devices are shown in block diagram form. Furthermore, one skilled in the art can readily appreciate that the specific sequences in which methods are presented and performed are illustrative and it is contemplated that the sequences can be varied and still remain within the spirit and scope of the various embodiments disclosed herein. All literature and similar materials cited in this application, including but not limited to, patents, patent applications, articles, books, treatises, and internet web pages are expressly incorporated by reference in their entirety for any purpose. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as is commonly understood by one of ordinary skill in the art to which the various embodiments described herein belong. When definitions of terms in incorporated references appear to differ from the definitions provided in the present teachings, the definition provided in the present teachings shall control. The section headings used herein are for organizational purposes only and are not to be construed as limiting the described subject matter in any way. Definitions To facilitate an understanding of the present technology, a number of terms and phrases are defined below. Additional definitions are set forth throughout the detailed description. ASTO-41250.601 Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment, though it may. Furthermore, the phrase “in another embodiment” as used herein does not necessarily refer to a different embodiment, although it may. Thus, as described below, various embodiments of the invention may be readily combined, without departing from the scope or spirit of the invention. In addition, as used herein, the term “or” is an inclusive “or” operator and is equivalent to the term “and/or” unless the context clearly dictates otherwise. The term “based on” is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of “a”, “an”, and “the” include plural references. The meaning of “in” includes “in” and “on.” As used herein, the terms “about”, “approximately”, “substantially”, and “significantly” are understood by persons of ordinary skill in the art and will vary to some extent on the context in which they are used. 
If there are uses of these terms that are not clear to persons of ordinary skill in the art given the context in which they are used, “about” and “approximately” mean plus or minus less than or equal to 10% of the particular term and “substantially” and “significantly” mean plus or minus greater than 10% of the particular term. As used herein, disclosure of ranges includes disclosure of all values and further divided ranges within the entire range, including endpoints and sub-ranges given for the ranges. As used herein, the disclosure of numeric ranges includes the endpoints and each intervening number therebetween with the same degree of precision. For example, for the range of 6–9, the numbers 7 and 8 are contemplated in addition to 6 and 9, and for the range 6.0–7.0, the numbers 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, and 7.0 are explicitly contemplated. As used herein, the suffix “-free” refers to an embodiment of the technology that omits the feature of the base root of the word to which “-free” is appended. That is, the term “X-free” as used herein means “without X”, where X is a feature of the technology omitted in the “X-free” technology. For example, a “calcium-free” composition does not comprise calcium, a “mixing-free” method does not comprise a mixing step, etc. Although the terms “first”, “second”, “third”, etc. may be used herein to describe various steps, elements, compositions, components, regions, layers, and/or sections, these steps, elements, compositions, components, regions, layers, and/or sections should ASTO-41250.601 not be limited by these terms, unless otherwise indicated. These terms are used to distinguish one step, element, composition, component, region, layer, and/or section from another step, element, composition, component, region, layer, and/or section. Terms such as “first”, “second”, and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first step, element, composition, component, region, layer, or section discussed herein could be termed a second step, element, composition, component, region, layer, or section without departing from technology. As used herein, the word “presence” or “absence” (or, alternatively, “present” or “absent”) is used in a relative sense to describe the amount or level of a particular entity (e.g., component, action, element). For example, when an entity is said to be “present”, it means the level or amount of this entity is above a pre-determined threshold; conversely, when an entity is said to be “absent”, it means the level or amount of this entity is below a pre-determined threshold. The pre-determined threshold may be the threshold for detectability associated with the particular test used to detect the entity or any other threshold. When an entity is “detected” it is “present”; when an entity is “not detected” it is “absent”. As used herein, an “increase” or a “decrease” refers to a detectable (e.g., measured) positive or negative change, respectively, in the value of a variable relative to a previously measured value of the variable, relative to a pre-established value, and/or relative to a value of a standard control. An increase is a positive change preferably at least 10%, more preferably 50%, still more preferably 2-fold, even more preferably at least 5-fold, and most preferably at least 10-fold relative to the previously measured value of the variable, the pre-established value, and/or the value of a standard control. 
Similarly, a decrease is a negative change preferably at least 10%, more preferably 50%, still more preferably at least 80%, and most preferably at least 90% of the previously measured value of the variable, the pre-established value, and/or the value of a standard control. Other terms indicating quantitative changes or differences, such as “more” or “less,” are used herein in the same fashion as described above. As used herein, a “system” refers to a plurality of real and/or abstract components operating together for a common purpose. In some embodiments, a “system” is an integrated assemblage of hardware and/or software components. In some embodiments, each component of the system interacts with one or more other components and/or is related to one or more other components. In some embodiments, a system refers to a combination of components and software for controlling and directing ASTO-41250.601 methods. For example, a “system” or “subsystem” may comprise one or more of, or any combination of, the following: mechanical devices, hardware, components of hardware, circuits, circuitry, logic design, logical components, software, software modules, components of software or software modules, software procedures, software instructions, software routines, software objects, software functions, software classes, software programs, files containing software, etc., to perform a function of the system or subsystem. Thus, the methods and apparatus of the embodiments, or certain aspects or portions thereof, may take the form of program code (e.g., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, flash memory, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the embodiments. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (e.g., volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may implement or utilize the processes described in connection with the embodiments, e.g., through the use of an application programming interface (API), reusable controls, or the like. Such programs are preferably implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language, and combined with hardware implementations. As used herein, the term “computed tomography” is abbreviated “CT” and refers both to tomographic and non-tomographic radiography. For instance, the term “CT” refers to numerous forms of CT, including but not limited to x-ray CT, positron emission tomography (PET), single-photon emission computed tomography (SPECT), and photon counting computed tomography. Generally, computed tomography (CT) comprises use of an x-ray source and a detector that rotates around a patient and subsequent reconstruction of images into different planes. In embodiments of CT (e.g., devices, apparatuses, and methods provided for CT) described herein, the x-ray source is a static source and the patient is rotated with respect to the static source. 
Currents for x-rays used in CT describe the current flow from a cathode to an anode and are typically measured in milliamperes (mA). As used herein, the term “structured to [verb]” means that the identified element or assembly has a structure that is shaped, sized, disposed, coupled, and/or configured to ASTO-41250.601 perform the identified verb. For example, a member that is “structured to move” is movably coupled to another element and includes elements that cause the member to move or the member is otherwise configured to move in response to other elements or assemblies. As such, as used herein, “structured to [verb]” recites structure and not function. Further, as used herein, “structured to [verb]” means that the identified element or assembly is intended to, and is designed to, perform the identified verb. As used herein, the term “associated” means that the elements are part of the same assembly and/or operate together or act upon/with each other in some manner. For example, an automobile has four tires and four hub caps. While all the elements are coupled as part of the automobile, it is understood that each hubcap is “associated” with a specific tire. As used herein, the term “coupled” refers to two or more components that are secured, by any suitable means, together. Accordingly, in some embodiments, the statement that two or more parts or components are “coupled” shall mean that the parts are joined or operate together either directly or indirectly, e.g., through one or more intermediate parts or components. As used herein, “directly coupled” means that two elements are directly in contact with each other. As used herein, “fixedly coupled” or “fixed” means that two components are coupled so as to move as one while maintaining a constant orientation relative to each other. Accordingly, when two elements are coupled, all portions of those elements are coupled. A description, however, of a specific portion of a first element being coupled to a second element, e.g., an axle first end being coupled to a first wheel, means that the specific portion of the first element is disposed closer to the second element than the other portions thereof. Further, an object resting on another object held in place only by gravity is not “coupled” to the lower object unless the upper object is otherwise maintained substantially in place. That is, for example, a book on a table is not coupled thereto, but a book glued to a table is coupled thereto. As used herein, the term “removably coupled” or “temporarily coupled” means that one component is coupled with another component in an essentially temporary manner. That is, the two components are coupled in such a way that the joining or separation of the components is easy and does not damage the components. Accordingly, “removably coupled” components are readily uncoupled and recoupled without damage to the components. As used herein, the term “operatively coupled” means that a number of elements or assemblies, each of which is movable between a first position and a second position, or a first configuration and a second configuration, are coupled so that as the first element ASTO-41250.601 moves from one position/configuration to the other, the second element moves between positions/configurations as well. It is noted that a first element is “operatively coupled” to another without the opposite being true. 
As used herein, the term “rotatably coupled” refers to two or more components that are coupled in a manner such that at least one of the components is rotatable with respect to the other. As used herein, the term “translatably coupled” refers to two or more components that are coupled in a manner such that at least one of the components is translatable with respect to the other. As used herein, the term “temporarily disposed” means that a first element or assembly is resting on a second element or assembly in a manner that allows the first element/assembly to be moved without having to decouple or otherwise manipulate the first element. For example, a book simply resting on a table, e.g., the book is not glued or fastened to the table, is “temporarily disposed” on the table. As used herein, the term “correspond” indicates that two structural components are sized and shaped to be similar to each other and are coupled with a minimum amount of friction. Thus, an opening which “corresponds” to a member is sized slightly larger than the member so that the member may pass through the opening with a minimum amount of friction. This definition is modified if the two components are to fit “snugly” together. In that situation, the difference between the size of the components is even smaller whereby the amount of friction increases. If the element defining the opening and/or the component inserted into the opening are made from a deformable or compressible material, the opening may even be slightly smaller than the component being inserted into the opening. With regard to surfaces, shapes, and lines, two, or more, “corresponding” surfaces, shapes, or lines have generally the same size, shape, and contours. As used herein, a “path of travel” or “path,” when used in association with an element that moves, includes the space an element moves through when in motion. As such, any element that moves inherently has a “path of travel” or “path.” As used herein, the statement that two or more parts or components “engage” one another shall mean that the elements exert a force or bias against one another either directly or through one or more intermediate elements or components. Further, as used herein with regard to moving parts, a moving part may “engage” another element during the motion from one position to another and/or may “engage” another element once in the described position. Thus, it is understood that the statements, “when ASTO-41250.601 element A moves to element A first position, element A engages element B,” and “when element A is in element A first position, element A engages element B” are equivalent statements and mean that element A either engages element B while moving to element A first position and/or element A engages element B while in element A first position. As used herein, the term “operatively engage” means “engage and move.” That is, “operatively engage” when used in relation to a first component that is structured to move a movable or rotatable second component means that the first component applies a force sufficient to cause the second component to move. For example, a screwdriver is placed into contact with a screw. When no force is applied to the screwdriver, the screwdriver is merely “coupled” to the screw. If an axial force is applied to the screwdriver, the screwdriver is pressed against the screw and “engages” the screw. However, when a rotational force is applied to the screwdriver, the screwdriver “operatively engages” the screw and causes the screw to rotate. 
Further, with electronic components, “operatively engage” means that one component controls another component by a control signal or current. As used herein, the term “orthogonal” means perpendicular, essentially perpendicular, or substantially perpendicular. Two orthogonal components or elements (e.g., objects, lines, line segments, vectors, or axes) meet at an angle of 90° at their point of intersection. As used herein, the term “number” shall mean one or an integer greater than one (e.g., a plurality). As used herein, in the phrase “[x] moves between its first position and second position,” or, “[y] is structured to move [x] between its first position and second position,” “[x]” is the name of an element or assembly. Further, when [x] is an element or assembly that moves between a number of positions, the pronoun “its” means “[x],” i.e., the named element or assembly that precedes the pronoun “its.” As used herein, a “radial side/surface” for a circular or cylindrical body is a side/surface that extends about, or encircles, the center thereof or a height line passing through the center thereof. As used herein, an “axial side/surface” for a circular or cylindrical body is a side that extends in a plane extending generally perpendicular to a height line passing through the center. That is, generally, for a cylindrical soup can, the “radial side/surface” is the generally circular sidewall, and the “axial side(s)/surface(s)” are the top and bottom of the soup can. As used herein, a “diagnostic” test includes the detection or identification of a disease state or condition of a subject, determining the likelihood that a subject will ASTO-41250.601 contract a given disease or condition, determining the likelihood that a subject with a disease or condition will respond to therapy, determining the prognosis of a subject with a disease or condition (or its likely progression or regression), and determining the effect of a treatment on a subject with a disease or condition. For example, a diagnostic can be used for detecting the presence or likelihood of a subject having a cancer or the likelihood that such a subject will respond favorably to a compound (e.g., a pharmaceutical, e.g., a drug) or other treatment. As used herein, the term “condition” refers generally to a disease, malady, injury, event, or change in health status. As used herein, the term “treating” or “treatment” with respect to a condition refers to preventing the condition, slowing the onset or rate of development of the condition, reducing the risk of developing the condition, preventing, or delaying the development of symptoms associated with the condition, reducing or ending symptoms associated with the condition, generating a complete or partial regression of the condition, or some combination thereof. In some embodiments, “treatment” comprises exposing a patient or a portion thereof (e.g., a tissue, organ, body part, or other localized region of a patient body) to radiation (e.g., electromagnetic radiation, ionizing radiation). As used herein, the term “beam” refers to a stream of radiation (e.g., electromagnetic wave and/or or particle radiation). In some embodiments, the beam is produced by a source and is restricted to a small-solid angle. In some embodiments, the beam is collimated. In some embodiments, the beam is generally unidirectional. In some embodiments, the beam is divergent. 
As used herein, the term “patient” or “subject” refers to a mammalian animal that is identified and/or selected for imaging and/or treatment with radiation. Accordingly, in some embodiments, a patient or subject is contacted with a beam of radiation, e.g., a primary beam produced by a radiation source. In some embodiments, the patient or subject is a human. In some embodiments, the patient or subject is a veterinary or farm animal, a domestic animal or pet, or animal used for clinical research. In some embodiments, the subject or patient has cancer and/or the subject or patient has either been recognized as having or at risk of having cancer. As used herein, the term “treatment volume” or “imaging volume” refers to the volume (e.g., tissue) of a patient that is selected for imaging and/or treatment with radiation. For example, in some embodiments, the “treatment volume” or “imaging volume” comprises a tumor in a cancer patient. As used herein, the term “healthy tissue” refers to the volume (e.g., tissue) of a patient that is not and/or does not comprise ASTO-41250.601 the treatment volume. In some embodiments, the imaging volume is larger than the treatment volume and comprises the treatment volume. As used herein, the term “radiation source” or “source” refers to an apparatus that produces radiation (e.g., ionizing radiation) in the form of photons (e.g., described as particles or waves). In some embodiments, a radiation source is a linear accelerator (“linac”) that produces x-rays or electrons to treat a cancer patient by contacting a tumor with the x-ray or electron beam. In some embodiments, the source produces particles (e.g., photons, electrons, neutrons, hadrons, ions (e.g., protons, carbon ions, other heavy ions)). In some embodiments, the source produces electromagnetic waves (e.g., x-rays and gamma rays having a wavelength in the range of approximately 1 pm to approximately 1 nm). While it is understood that radiation can be described as having both wave-like and particle-like aspects, it is sometimes convenient to refer to radiation in terms of waves and sometimes convenient to refer to radiation in terms of particles. Accordingly, both descriptions are used throughout without limiting the technology and with an understanding that the laws of quantum mechanics provide that every particle or quantum entity is described as either a particle or a wave. As used herein, the term “static source” refers to a source that does not revolve around a patient during use of the source for imaging or therapy. In particular, a “static source” remains fixed with respect to an axis passing through the patient while the patient is being imaged or treated. While the patient may rotate around said axis to produce relative motion between the static source and rotating patient that is equivalent to the relative motion of a source revolving around a static patient, a static source does not move with reference to a third object, frame of reference (e.g., a treatment room in which a patient is positioned), or patient axis of rotation during imaging or treatment, while the patient is rotated with respect to said third object, said frame of reference (e.g., said treatment room in which said patient is positioned), or patient axis of rotation through the patient during imaging or treatment. A static source may be installed on a mobile platform and the static source may move with respect to the Earth and fixtures on the Earth as the mobile platform moves to transport the static source. 
Thus, the term “static source” may refer to a mobile “static source” provided that the mobile “static source” does not revolve around an axis of rotation through the patient during imaging or treatment of the patient. Further, the static source may translate and/or revolve around the patient to position the static source prior to imaging or treatment of the patient or after imaging or treatment of the patient. Thus, the term “static source” may refer to a source that translates or revolves around the patient in non-imaging and non- ASTO-41250.601 treatment use, e.g., to position the source relative to the patient when the patient is not being imaged and/or treated. In some embodiments, the “static source” is a photon source and thus is referred to as a “static photon source”. Embodiments of the technology described herein relate to locations in space, translations along axes, and/or rotations around axes. In some embodiments, a three- dimensional coordinate system is used that comprises an X axis, a Y axis, and a Z axis defined with respect to a patient support and/or a patient. See FIG.1A and FIG.1B. As shown in FIG.1A and FIG. 1B, embodiments use a coordinate system in which the X axis and Y axis together are in and/or define a horizontal plane and the Z axis is and/or defines a vertical axis. With respect to a patient positioned on the patient support (e.g., the patient positioning apparatus or patient support of a patient positioning system), the X axis is a left-right, horizontal, or frontal axis; the Y axis is an anteroposterior, dorsoventral, or sagittal axis; and the Z axis is a sagittal or longitudinal axis. The X axis and the Y axis together are in and/or define a horizontal, transverse, and/or axial plane. The Y axis and the Z axis together are in and/or define a sagittal or longitudinal plane. The X axis and the Z axis together are in and/or define a frontal or coronal plane. Accordingly, in some embodiments, descriptions of movements as “forward” or “backward” are movements along the Y axis; descriptions of movements as “left” or “right” are movements along the X axis; and descriptions of movements as “up” and “down” are movements along the Z axis. Furthermore, a rotation described as “roll” is rotation around the Y axis; a rotation described as “pitch” is rotation around the X axis; and a rotation described as “yaw” is rotation around the Z axis. Angles of rotations around the X, Y, and Z axes may be designated ψ (psi), φ (phi), and θ (theta), respectively. Thus, in some embodiments, technologies are described as having six degrees of freedom, e.g., translations along one or more of the X, Y, and/or Z axes; and/or rotations around one or more of the X, Y, and/or Z axes. Adjustments or changes in position by translation in the X, Y, and Z directions, respectively, may be denoted by ∆X, ∆Y, and ∆Z. Adjustments or changes in position by rotation around the X, Y, and Z axes, respectively, may be denoted by ∆ψ, ∆φ, and ∆θ. As used herein, the term “scene” refers to a reference image and/or set of reference images, information identifying the camera(s) that captured the reference image and/or set of reference images, and a region of interest setting for each camera that captured the reference image and/or set of reference images. The scene may be saved in a non-volatile storage medium and retrieved from the non-volatile storage medium to provide a retrieved scene. 
For instance, the scene may be saved when ASTO-41250.601 reference images are obtained and saved for a patient position setup, patient positioner configuration setup, or imaging setup. Each image or set of reference images may be saved with associated scene information identifying the camera that obtained the reference image and the region of interest of the camera when it obtained the reference image. Information used for identifying a camera may be any information that unambiguously identifies a camera and that is persistently or essentially persistently associated with a camera (e.g., at least between capture of a reference image with the camera and use of the reference image and the camera for alignment of a patient positioner and/or patient for imaging). In some embodiments, information used for identifying a camera is, e.g., a hardware address (e.g., an Ethernet address, a media access control (MAC) address, a static internet protocol (IP) address, or other identifier (e.g., “camera 1”, “camera 2”, “camera 3”, “camera 4”, “camera 5”, etc.) The region of interest of a camera may indicate a zoom factor of a lens of a camera or may identify a subset of camera sensor pixels (e.g., an area of the sensor array comprising a subset of pixels) shown or saved as an image (e.g., a “digital zoom”). Thus, in some embodiments, the region of interest setting associated with a camera describes an area of a camera sensor that provided the image pixels saved in the reference image. The region of interest setting may subsequently be used to select the same area of the camera sensor for providing live video to superimpose over the saved reference image. Thus, the scene information identifying the list of selected cameras and the region of interest of each camera identifies the cameras and the sensor pixels of each of the cameras to be used in obtaining and displaying live tracking images of the patient for alignment with the reference images obtained previously using the list of selected cameras and region of interest of each selected camera. FIG.1C is a schematic drawing showing an exemplary embodiment of a data record of a scene 500 comprising a number of images (image 511, image 512, image 513, image 514, and image 515 (e.g., at least three of which show views that are mutually orthogonal to each other)), a list of cameras 520 used to record the images (e.g., a list of unambiguous identifiers each associated one-to-one with a physical camera of the OGTS), and a list 530 of regions of interest of each camera of the list of cameras. The scene may be retrieved to provide a number of reference images. While FIG.1C shows a scene comprising five images and associated camera and ROI information for five cameras, the technology is not limited to a scene comprising five images. A scene may comprise 1, 2, 3, 4, or 5 images and associated information identifying 1, 2, 3, 4, or 5 cameras and 1, 2, 3, 4, or 5 ROIs. For example, particular embodiments relate to saving and retrieving scenes comprising three images ASTO-41250.601 taken from three mutually orthogonal cameras, a list of the three mutually orthogonal cameras, and a list comprising each ROI for each of the three mutually orthogonal cameras that was used to produce each of the three images. See FIG.1D. Further, while some embodiments described herein relate to OGTS systems comprising 5 cameras, embodiments are contemplated comprising more than 5 cameras and scenes comprising images, camera lists, and ROI information for more than 5 cameras. 
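By way of a non-limiting illustration, the scene data record of FIG. 1C may be sketched in a few lines of Python. The class and field names below are hypothetical and are not part of the OGTS software; the sketch simply shows a scene bundling the reference images, the unambiguous camera identifiers, and the region-of-interest setting of each camera, saved to and retrieved from non-volatile storage.

```python
# Minimal sketch of a scene data record (cf. FIG. 1C): reference images, the
# cameras that captured them, and each camera's region of interest.
# Class and field names are illustrative, not the actual OGTS implementation.
import json
from dataclasses import dataclass, asdict
from pathlib import Path
from typing import List

@dataclass
class RegionOfInterest:
    # Rectangular subarray of sensor pixels: one corner plus width and height.
    x0: int
    y0: int
    width: int
    height: int

@dataclass
class Scene:
    camera_ids: List[str]           # e.g., MAC addresses or "camera 1" ... "camera 5"
    image_paths: List[str]          # saved reference images, one per listed camera
    rois: List[RegionOfInterest]    # ROI used by each camera at capture time

    def save(self, path: Path) -> None:
        # Persist the scene to non-volatile storage so it can be retrieved later.
        path.write_text(json.dumps(asdict(self), indent=2))

    @staticmethod
    def load(path: Path) -> "Scene":
        # Retrieve a saved scene to provide a "retrieved scene".
        raw = json.loads(path.read_text())
        rois = [RegionOfInterest(**r) for r in raw["rois"]]
        return Scene(raw["camera_ids"], raw["image_paths"], rois)

# Example: a three-camera scene with mutually orthogonal views.
scene = Scene(
    camera_ids=["camera 5 (overhead)", "camera 1", "camera 2"],
    image_paths=["ref_overhead.png", "ref_cam1.png", "ref_cam2.png"],
    rois=[RegionOfInterest(1000, 800, 2000, 2000)] * 3,
)
scene.save(Path("patient_position_setup_scene.json"))
retrieved = Scene.load(Path("patient_position_setup_scene.json"))
```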
A sensor comprises a pixel area comprising a matrix of N × M elements called pixels or photosites, where N is the number of columns and M is the number of rows. Each pixel comprises a photosensitive region for accumulating incoming light energy in the form of electric charge and transistors that control operation of the pixel and provide the information from the pixel to a microprocessor and/or memory. As used herein, the term “region of interest”, which is abbreviated “ROI”, refers to a portion of a sensor that is selected for producing an image (e.g., for displaying in a window of the OGTS GUI). A sensor comprises an array of pixels (also known as photosites) and the region of interest defines a subarray (e.g., a rectangular subarray) of pixels of the sensor that is read by a processor to form an image. Selecting a subarray of pixels of a sensor may also be termed “windowing”. Selecting an ROI is advantageous to provide a zoomed-in image of an object (e.g., patient) or portion of an object (e.g., portion of a patient). Selecting an ROI is also advantageous to increase the frame rate of displaying and refreshing an image in a window of the OGTS GUI. Decreasing the number of pixels to read, transmit, and display by selecting an ROI increases the throughput of displaying images per unit time for a constant throughput of pixels per unit time. Performing analysis or calculations on a subset of pixels similarly increases the efficiency of performing analysis or calculations on images. For instance, a gigabit rate of data transmission provides a frame rate of approximately 2 frames per second for a 20-megapixel image. However, selecting an ROI of 4,000,000 pixels (e.g., described by approximately 100 megabits of data) increases the frame rate to approximately 10 frames per second. In some embodiments, the region of interest is defined by indicating a first pixel and a second pixel that define opposite corners of a rectangular subarray of pixels of the sensor. In some embodiments, the region of interest is defined by indicating a pixel defining one corner of a rectangular subarray of pixels of the sensor and a height and width of the subarray of pixels. In some embodiments, the region of interest is a shape (e.g., a regular or irregular shape) defined by indicating one or more pixels that ASTO-41250.601 define the perimeter of the region of interest. In some embodiments, a selection circuit is used to provide control signals to the selected pixels of the ROI. As used herein, the term “spatial resolution” of a camera refers to the ability of the camera to resolve and reproduce details of an object of which an image is captured. In other words, the term “spatial resolution” refers to minimum distance at which different points of an object are distinguished by a camera as individual points in an image of the object. Technologies for image acquisition, storage, manipulation, and display are described in, e.g., Brinkmann, The Art and Science of Digital Compositing (The Morgan Kaufmann Series in Computer Graphics, 2nd edition, Elsevier 2008), incorporated herein by reference. Optical guidance tracking system The technology described herein provides an optical guidance tracking system (OGTS). In some embodiments, the OGTS finds use with a patient positioning apparatus (e.g., as described in U.S. Pat. No.11,529,109 (PATIENT POSITIONING APPARATUS) incorporated herein by reference) and/or a patient positioning system comprising a patient support (e.g., as described in U.S. Pat. App. Ser. 
No.17/894,335 (PATIENT POSITIONING SYSTEM), incorporated herein by reference). In some embodiments, the technology provided herein finds use with a patient support that is a component or subsystem of a patient positioning apparatus or a patient positioning system as described in U.S. Pat. No.11,529,109 and/or U.S. Pat. App. Ser. No. 17/894,335. In some embodiments, the OGTS finds use with a medical imaging apparatus (e.g., a magnetic resonance imaging apparatus, a CT scanner (e.g., as described in U.S. Pat. App. Ser. No. 17/535,091, incorporated herein by reference), etc.) In some embodiments, the OGTS finds use with a radiation therapy apparatus (e.g., a radiation source (e.g., a stationary radiation source)) used for particle therapy (e.g., photon (e.g., x-ray) and/or hadron (e.g., proton) therapy). An exemplary patient positioning apparatus or patient support is shown in FIG.1A and FIG. 1B with an associated coordinate system. A medical imaging and treatment system comprising a patient positioning apparatus or patient support, a patient, and components of the OGTS is shown in FIG.2A to FIG.2D. For example, e.g., as shown in FIG. 2A to 2D, embodiments provide an OGTS 200 comprising a patient positioning system 220 (e.g., comprising a patient 900) and a camera system. In some embodiments, the OGTS finds use with a medical imaging device (e.g., a CT scanner 210). In some embodiments, the OGTS finds use with a ASTO-41250.601 radiation source (e.g., a static radiation source) to provide a therapeutic particle (e.g., photon, proton) beam to a patient. The OGTS has a main X axis 801 (FIG. 2A and FIG.2C), a main Y axis 802 (FIG. 2A and FIG.2C), and a main Z axis 803 (FIG.2B and FIG.2D). As further shown in FIG. 2A to FIG.2D, the camera system comprises an overhead camera 235 (FIG. 2B and FIG. 2D) and one or more peripheral cameras 231, 232, 233, and/or 234 (FIG. 2A and 2B). In some embodiments, any two, any three, or all four of the cameras 231, 232, 233, and/or 234 are provided in the systems described herein. In some embodiments, the camera system comprises an overhead camera 235 and at least two peripheral cameras 231, 232, 233, and/or 234 (FIG. 2A to FIG.2D). In some embodiments, the camera system comprises an overhead camera 235 and four peripheral cameras 231, 232, 233, and 234 (FIG. 2A to FIG. 2D). In some embodiments, the overhead camera 235 is placed on the main Z axis 803 of the patient positioning system 220 and/or of the CT scanner 210. In some embodiments, the camera system comprises two (e.g., at least two (e.g., two, three, or four)) peripheral cameras spaced at an interval of 90° (e.g., substantially and/or essentially 90°) around the periphery of a patient positioning system 220 (e.g., comprising a patient 900) and/or CT scanner 210 (FIG. 2A and FIG.2C). In some embodiments, the cameras are used to monitor and/or correct the position of an object, e.g., using the overhead camera to view and/or correct rotation errors of the object around the Z axis and to view and/or correct translation errors in X or Y directions; and using one or more peripheral cameras to view and/or correct translation errors in the X, Y, or Z directions and/or to view and/or correct rotation errors around the X, Y, or Z axes. An advantage of the technology is the ability to obtain a correction (e.g., a 4-degree correction) for the position of an object using three orthogonal camera views and previously saved reference images without requiring mathematical calculations to determine transformation in space. 
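Obtaining such corrections at a useful display rate relies on the region-of-interest windowing described in the definitions above. The frame-rate arithmetic may be checked with the following sketch, which assumes the example figures given herein (24 bits per pixel and a gigabit-per-second link); the numbers are illustrative only.

```python
# Sketch of the bandwidth-limited frame-rate arithmetic for ROI windowing.
# Assumes 24 bits (3 bytes) per RGB pixel and a 1 Gbit/s transmission link,
# as in the examples in the text; all numbers are illustrative.

BITS_PER_PIXEL = 24          # 8 bits per color for R, G, and B
LINK_BITS_PER_SECOND = 1e9   # gigabit Ethernet (GigE) data rate

def max_frame_rate(pixels_per_frame: int) -> float:
    """Frames per second achievable when transmission bandwidth is the bottleneck."""
    bits_per_frame = pixels_per_frame * BITS_PER_PIXEL
    return LINK_BITS_PER_SECOND / bits_per_frame

full_sensor = 5496 * 3672    # ~20.2 megapixels (full sensor readout)
roi = 2000 * 2000            # 4-megapixel region of interest

print(f"full sensor: {max_frame_rate(full_sensor):.1f} fps")   # ~2 fps
print(f"4 MP ROI:    {max_frame_rate(roi):.1f} fps")           # ~10 fps
```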
In some embodiments, the object is a patient. As described herein, the technology is not limited in the placement of the peripheral cameras provided that two of the peripheral cameras are orthogonal to each other and each of the peripheral cameras is orthogonal to the overhead camera. Embodiments comprising exemplary placements of the cameras are described below. Placement of the cameras may be adapted from these exemplary positions to accommodate components of an imaging system, radiotherapy system, or patient positioning system with which the OGTS is used. For example, as shown in FIG. 2A and FIG. 2C, embodiments provide a camera system comprising two, three, or four peripheral cameras 231, 232, 233, and/or 234 spaced at an interval of 90° (e.g., substantially and/or essentially 90°) around the periphery of the patient positioning system 220 (e.g., comprising a patient 900) and/or CT scanner 210. In some embodiments, two, three, or four peripheral cameras 231, 232, 233, and/or 234 are placed on the major axes X and/or Y of the patient positioning system 220 (FIG. 2C and FIG. 2D). In FIG. 2C, one or two peripheral cameras 232 and/or 234 are hidden from view by components of the CT scanner 210 (e.g., as shown by the dotted rectangles in FIG. 2C). One or two peripheral cameras 231 and/or 233 are provided in front of and/or behind the patient positioning system 220. In FIG. 2D, the rear peripheral camera 233, if present, is occluded from view by the patient positioning system 220 and/or by the patient 900. Using the arrangement of cameras shown in FIG. 2C and FIG. 2D, embodiments provide that an object of interest (e.g., patient 900) may face one of the cameras. Thus, embodiments provide for detecting transverse movements of the object using the camera toward which the object faces and detecting longitudinal movements in other cameras (e.g., two other cameras) that are orthogonal to the camera toward which the object is facing. Further, as shown in FIG. 2A and FIG. 2B, embodiments provide that two, three, or four of the peripheral cameras 231, 232, 233, and/or 234 are placed at positions that are displaced from the main X axis 801 or main Y axis 802 of the patient positioning system 220 and/or CT scanner 210 by a rotation of 45° (e.g., substantially and/or essentially 45°) in the XY plane around the main Z axis 803. The overhead camera 235 (FIG. 2B and FIG. 2D) and the one or more peripheral cameras 231, 232, 233, and/or 234 are placed to image the patient positioning system 220 and a patient 900 when positioned on the patient positioning system 220. Thus, the overhead camera 235 (FIG. 2B and FIG. 2D) and the one or more peripheral cameras 231, 232, 233, and/or 234 provide a number of views of the patient positioning system 220 and a patient 900 when positioned on the patient positioning system 220. The overhead camera 235 is placed to provide an overhead view of the patient positioning system 220 and/or a patient 900, e.g., through a central opening (e.g., bore) of a scanner ring of the CT scanner 210. FIG. 2A and FIG. 2C show a view of the patient positioning system 220 and a patient 900 as viewed by the overhead camera 235 through the central opening of the scanner ring.
In some embodiments, a pair of adjacent peripheral cameras (e.g., 231 and 232, 232 and 233, 233 and 234, or 234 and 231) provides two views (e.g., orthogonal views) of the patient positioning system 220 and/or a patient 900 that are used to construct a three-dimensional image of the patient positioning system 220 and/or a patient 900 (e.g., using triangulation). See, e.g., Hartley and Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press (New York), 2nd edition, 2003), incorporated herein by reference. In some embodiments, three or four cameras 231, 232, 233, and/or 234 provide three or four views of the patient positioning system 220 and/or a patient 900 that are used with the view provided by the overhead camera 235 to construct a three-dimensional image (e.g., a surface rendering) of the patient positioning system 220 and/or a patient 900. In some embodiments, the overhead camera 235 and two adjacent peripheral cameras 231, 232, 233, and/or 234 are positioned in space so that the main axes of the fields of view for the three cameras (e.g., overhead camera 235, peripheral camera 231, and peripheral camera 232; overhead camera 235, peripheral camera 232, and peripheral camera 233; overhead camera 235, peripheral camera 233, and peripheral camera 234; or overhead camera 235, peripheral camera 234, and peripheral camera 231) are mutually orthogonal in three-dimensional space. See FIG. 2E. In some particular embodiments, the overhead camera 235 is positioned in space so that the main axis of the field of view for overhead camera 235 is perpendicular to each main axis of each field of view of each peripheral camera 231, 232, 233, and 234 (e.g., the main axis of the field of view of overhead camera 235 is perpendicular to the main axis of the field of view of peripheral camera 231, the main axis of the field of view of overhead camera 235 is perpendicular to the main axis of the field of view of peripheral camera 232, the main axis of the field of view of overhead camera 235 is perpendicular to the main axis of the field of view of peripheral camera 233, and the main axis of the field of view of overhead camera 235 is perpendicular to the main axis of the field of view of peripheral camera 234); and each pair of adjacent peripheral cameras (e.g., 231 and 232, 232 and 233, 233 and 234, and 234 and 231) is positioned in space so that the main axes of the fields of view for each pair of adjacent peripheral cameras are perpendicular. See FIG. 2F. In some embodiments, the OGTS comprises five cameras that are arranged as in FIG. 2F with an overhead camera 235 and four peripheral cameras 231, 232, 233, and 234 that are spaced at 90° intervals and each at 90° from the overhead camera 235. The arrangement shown in FIG. 2F may be modified to accommodate components of the imaging and medical systems that may impede installation of the full five-camera system. Accordingly, the technology comprises OGTS systems comprising an overhead camera 235 and 2, 3, or 4 peripheral cameras 231, 232, 233, and/or 234 with at least three cameras being mutually orthogonal. The two-dimensional images provided from three orthogonal views provide for aligning the patient in the XY, XZ, and YZ planes individually. See FIG. 3.
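By way of illustration, the mutually orthogonal view axes of FIG. 2F may be sketched numerically. In the hypothetical layout below, the overhead camera looks down the Z axis and four peripheral cameras look along the ±X and ±Y axes toward the isocenter from an assumed stand-off of 3.0 meters; the dot products confirm that the overhead camera paired with any two adjacent peripheral cameras yields three mutually orthogonal view axes.

```python
# Sketch of the five-camera layout of FIG. 2F: one overhead camera looking down
# the Z axis and four peripheral cameras at 90-degree intervals looking along
# the X and Y axes toward the isocenter. Positions and the 3.0 m stand-off
# distance are illustrative assumptions.
import itertools
import numpy as np

DISTANCE_M = 3.0  # cameras placed ~2.5-4.0 m from the isocenter

# Unit view directions (camera -> isocenter) for each camera.
view_axes = {
    "overhead 235":   np.array([0.0, 0.0, -1.0]),   # looks down the main Z axis
    "peripheral 231": np.array([0.0, -1.0, 0.0]),   # looks along -Y
    "peripheral 232": np.array([-1.0, 0.0, 0.0]),   # looks along -X
    "peripheral 233": np.array([0.0, 1.0, 0.0]),    # looks along +Y
    "peripheral 234": np.array([1.0, 0.0, 0.0]),    # looks along +X
}

# Camera positions: the isocenter minus the view direction scaled by the stand-off.
positions = {name: -DISTANCE_M * axis for name, axis in view_axes.items()}
print("overhead camera position (m):", positions["overhead 235"])

# Check orthogonality: the overhead camera is orthogonal to every peripheral
# camera, and adjacent peripheral cameras are orthogonal to each other; only
# the two facing pairs of peripheral cameras are opposed.
for a, b in itertools.combinations(view_axes, 2):
    dot = float(np.dot(view_axes[a], view_axes[b]))
    print(f"{a:15s} . {b:15s} = {dot:+.1f}  {'orthogonal' if dot == 0 else 'opposed'}")
```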
In some embodiments, the cameras 231, 232, 233, 234, and/or 235 are typically placed approximately 2.5 to 4.0 meters (e.g., 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, or 4.0 meters) from the patient positioning system 220 and/or patient 900. In some embodiments, the cameras 231, 232, 233, 234, and/or 235 are placed approximately 2.5 to 4.0 meters (e.g., 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, or 4.0 meters) from the isocenter of a medical imaging and/or radiation therapy system comprising the patient positioning system 220. The camera system comprises a number of cameras. In some embodiments, the cameras have a spatial resolution of 1.0 mm or better (e.g., a spatial resolution of 1.00 mm, 0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60, 0.55, 0.50, 0.45, 0.40, 0.35, 0.30, 0.25, 0.20, 0.15, or 0.10 mm), wherein a better or higher resolution refers to a lower spatial resolution. In some embodiments, the cameras have a spatial resolution of 0.5 mm or better (e.g., a spatial resolution of 0.50, 0.49, 0.48, 0.47, 0.46, 0.45, 0.44, 0.43, 0.42, 0.41, 0.40, 0.39, 0.38, 0.37, 0.36, 0.35, 0.34, 0.33, 0.32, 0.31, 0.30, 0.29, 0.28, 0.27, 0.26, 0.25, 0.24, 0.23, 0.22, 0.21, 0.20, 0.19, 0.18, 0.17, 0.16, 0.15, 0.14, 0.13, 0.12, 0.11, or 0.10 mm). In some embodiments, the cameras have a spatial resolution of 0.5 mm or better (e.g., a spatial resolution of 0.50, 0.49, 0.48, 0.47, 0.46, 0.45, 0.44, 0.43, 0.42, 0.41, 0.40, 0.39, 0.38, 0.37, 0.36, 0.35, 0.34, 0.33, 0.32, 0.31, 0.30, 0.29, 0.28, 0.27, 0.26, 0.25, 0.24, 0.23, 0.22, 0.21, 0.20, 0.19, 0.18, 0.17, 0.16, 0.15, 0.14, 0.13, 0.12, 0.11, or 0.10 mm) in close-up zoom mode. In some embodiments, the cameras have a refresh rate of at least 5 Hz (e.g., at least 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30 Hz). However, while a high refresh rate is not necessary for some embodiments of the technology, the technology is not limited to cameras having refresh rates of approximately 5 to 30 Hz and encompasses use of cameras having higher refresh rates, e.g., 60 Hz, 120 Hz, or 240 Hz or more. In some embodiments, one or more cameras comprises an accelerometer and/or other component (e.g., gyroscope, magnetometer) that identifies the orientation of the camera in space and/or its location. In some embodiments, quaternion orientation solutions are provided using accelerometer and gyroscope data to identify the ASTO-41250.601 orientation and/or location of a camera. In some embodiments, the OGTS comprises a beacon placed at a static and known location that is used to determine the orientation and/or location of the cameras. In some embodiments, the cameras have a color sensor comprising approximately 20 megapixels. For instance, in some embodiments, cameras have a sensor array of 5,496 pixels × 3,672 pixels and thus have a sensor comprising 20,181,312 pixels (also known as “photosites”) or approximately 20.2 megapixels. The technology is not limited to cameras comprising approximately 20 megapixels. In some embodiments, cameras comprise 5 to 10 megapixels (e.g., 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.0, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9.0, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9, or 10.0 megapixels). In some embodiments, cameras comprise more than 20 megapixels. 
For instance, embodiments comprise use of cameras comprising 20 or more, 30 or more, 40 or more, 50 or more, or 60 or more megapixels. Each sensor pixel transmits an electrical signal corresponding to a number of photons contacting the sensor pixel. The electrical signals are converted to luminance values for all of the sensor pixels. The luminance value of each sensor pixel provides the signal used to produce an image pixel in the resulting image. Thus, a camera having a sensor comprising 5,496 sensor pixels × 3,672 sensor pixels produces an image comprising 5,496 image pixels × 3,672 image pixels. Embodiments comprise determining or otherwise providing a relationship between a real world distance and the number of pixels (e.g., sensor pixels or image pixels) corresponding to the real world distance. For instance, a camera having a 1.0- meter horizontal field of view captured across a sensor comprising 3,672 columns of pixels has a relationship of 0.2723 mm per pixel in the horizontal direction. The same camera having a 1.5-meter vertical field of view captured across a sensor comprising 5,496 rows of pixels has a relationship of 0.2729 mm per pixel in the vertical direction. Accordingly, in some embodiments, the relationship between real world distance and number of pixels is approximately 0.27 mm (e.g., approximately 0.3 mm) per pixel. That is, a first image that is offset by one pixel with respect to a second image indicates a translation of an object in the real world of approximately 0.3 mm. The cameras provide images comprising red, green, and blue channels (e.g., RGB images). In some embodiments, the cameras are high-resolution gigabit Ethernet (GigE) cameras. Accordingly, in some embodiments, cameras transmit Ethernet frames at a rate of at least one gigabit per second. In some embodiments, the cameras are connected to a wireless communications module or a wired communications medium, e.g., optical ASTO-41250.601 fiber (e.g., 1000BASE-X), twisted pair cable (e.g., 1000BASE-T), or shielded balanced copper cable (e.g., 1000BASE-CX). In some embodiments, the cameras receive electric power over the same cables used for data transmission (e.g., twisted-pair Ethernet cabling), e.g., the cameras receive electrical power over the Ethernet cables (e.g., power over Ethernet (PoE)); and, in some embodiments, the cameras are powered by a separate external power supply. In some embodiments, cameras comprise a number of output interfaces (e.g., a number of GigE output interfaces) with each output interface (e.g., each GigE output interface) providing output for each of a number of image data channels. For example, in some embodiments, cameras output image data in a red channel (e.g., comprising image data corresponding to wavelengths of approximately 550–750 nm), a green channel (e.g., comprising image data corresponding to wavelengths of approximately 450–650 nm), and a blue channel (e.g., comprising image data corresponding to wavelengths of approximately 350–550 nm), and the cameras have an output interface (e.g., a GigE output interface) for each of the red, green, and blue channels. That is, in some embodiments, cameras have a red image channel output (e.g., a red image channel GigE output), a green image channel output (e.g., a green image channel GigE output), and a blue image channel output (e.g., a blue image channel GigE output). In some embodiments, a pixel comprises three elements to provide each of the red, green, and blue signals for the light contacting the pixel. 
Each color element is digitized to provide a range of intensity. In some embodiments, the intensity of each color is described using 8 bits (1 byte) to create a range of 256 intensity values for each color. Thus, according to this example, each pixel provides 3 bytes of data or 24 bits of data. Consequently, one frame of a 20-megapixel image is described by 20 megapixels × 24 bits/pixel = 480 megabits of data. A gigabit rate of data transmission thus provides approximately a frame rate of 2 frames per second for a full 20-megapixel image. In some embodiments, the cameras are mirrorless cameras complying to the MICRO FOUR THIRDS (MFT) SYSTEM. In some embodiments, the cameras comprise a MFT lens. In some embodiments, the cameras comprise a motorized zoom/focus/aperture MFT lens, e.g., to provide control of the field of view, to provide sufficient imaging detail, and/or to image all or a substantial portion of a patient and/or a patient positioning system. In some embodiments, cameras are mounted to a pan-tilt component comprising a mechanism to pan and/or tilt a camera. In some embodiments, the OGTS comprises a virtual pan-tilt module that crops images accordingly and thus replaces a mechanical pan-tilt mechanism. In some embodiments, the OGTS comprises a ASTO-41250.601 thermal (e.g., infrared) camera. In some embodiments, a thermal camera is used to monitor and/or detect a breathing cycle of a patient. In some embodiments, the OGTS comprises a computer. An exemplary computer that finds use in embodiments of the technology described herein is an industrial computer comprising an INTEL CORE I9 central processing unit, 32 gigabytes of random-access memory, one or more non-volatile memory express (NVMe) solid state drives (SSD) for storage of data and instructions, and a graphics processing unit (e.g., an NVIDIA GPU). In some embodiments, the computer communicates through an application programming interface with the cameras, e.g., using a generic programming interface. In some embodiments, the generic programming interface complies with the GENICAM standard (e.g., GENICAM Version 2.1.1, incorporated herein by reference). Systems In some embodiments, the technology relates to systems. In some embodiments, systems comprise the OGTS as described herein and a computer, e.g., as described above and in the examples. In some embodiments, systems comprise the OGTS, a computer, and a patient positioning system comprising a patient support or patient positioning apparatus. In some embodiments, systems comprise an OGTS as described herein and software components and/or hardware components structured to rotate and/or to translate a patient positioning system, patient positioning apparatus, and/or a patient support or configurable component thereof. For example, in some embodiments, systems comprise motors engaged with a patient positioning system, patient positioning apparatus, and/or a patient support or configurable component thereof, a power supply, and software configured to supply power to the motors to translate and/or rotate the patient positioning system, patient positioning apparatus, and/or a patient support or configurable component thereof. 
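The motorized movement just described, combined with the six-degree-of-freedom notation introduced above (∆X, ∆Y, ∆Z, ∆ψ, ∆φ, ∆θ), may be sketched as a small dispatch routine. The motor interface below is hypothetical and is intended only to illustrate how a computed adjustment might be routed to the translation and rotation axes of a patient positioning system.

```python
# Sketch of dispatching a six-degree-of-freedom adjustment to the motors of a
# patient positioning system. The Adjustment fields follow the notation used
# herein (translations dX, dY, dZ in mm; rotations d_psi, d_phi, d_theta in
# degrees about the X, Y, and Z axes). MotorAxis is a hypothetical stand-in
# for a real motor-control interface and omits interlocks, limits, and ramps.
from dataclasses import dataclass

@dataclass
class Adjustment:
    dx_mm: float = 0.0       # translation along X (left-right)
    dy_mm: float = 0.0       # translation along Y (anteroposterior)
    dz_mm: float = 0.0       # translation along Z (vertical)
    dpsi_deg: float = 0.0    # rotation about X (pitch)
    dphi_deg: float = 0.0    # rotation about Y (roll)
    dtheta_deg: float = 0.0  # rotation about Z (yaw)

class MotorAxis:
    """Hypothetical single-axis motor accepting relative move commands."""
    def __init__(self, name: str, units: str):
        self.name, self.units, self.position = name, units, 0.0

    def move_relative(self, amount: float) -> None:
        self.position += amount
        print(f"{self.name}: {amount:+.2f} {self.units} (now {self.position:+.2f} {self.units})")

def apply_adjustment(adj: Adjustment, axes: dict) -> None:
    # Apply translations first, then rotations; a real controller would also
    # check soft limits and sequence the moves safely around the patient.
    axes["X"].move_relative(adj.dx_mm)
    axes["Y"].move_relative(adj.dy_mm)
    axes["Z"].move_relative(adj.dz_mm)
    axes["pitch"].move_relative(adj.dpsi_deg)
    axes["roll"].move_relative(adj.dphi_deg)
    axes["yaw"].move_relative(adj.dtheta_deg)

axes = {n: MotorAxis(n, "mm") for n in ("X", "Y", "Z")}
axes.update({n: MotorAxis(n, "deg") for n in ("pitch", "roll", "yaw")})
apply_adjustment(Adjustment(dx_mm=1.5, dz_mm=-0.8, dtheta_deg=0.4), axes)
```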
In some embodiments, systems comprise software components structured to perform a method as described herein, e.g., to determine an adjustment (e.g., one or more of a ∆X, ∆Y, ∆Z, ∆ψ, ∆φ, and/or ∆θ) and/or to move (e.g., translate and/or rotate) a patient positioning system, a patient positioning apparatus, a patient support and/or a configurable component thereof. In some embodiments, systems comprise an OGTS as described herein and a controller. In some embodiments, the OGTS communicates with the controller. In some embodiments, the controller activates the OGTS (e.g., activates one or more cameras of the OGTS) and collects one or more image(s) from the one or more cameras. In some embodiments, the controller controls the region of interest displayed by one or more ASTO-41250.601 cameras. In some embodiments, the controller communicates with a graphic display terminal for displaying live images from one or more cameras. In some embodiments, the controller communicates with a graphic display terminal for displaying previously saved reference images (e.g., from a scene). In some embodiments, the controller communicates with user input devices such as a keyboard for receiving instructions from a user. In some embodiments, the controller has a general computer architecture including one or more processors communicating with a memory for the storage of non- transient control programs. In some embodiments, the controller communicates with a memory to store images from one or more cameras, to retrieve reference images previously obtained by one or more cameras, to store a scene identifying one or more cameras and the region of interest setting of the one or more cameras, and/or to retrieve a scene identifying one or more cameras and the region of interest setting of the one or more cameras. In some embodiments, systems comprise software configured to perform image recording, image analysis, image storage, image manipulation, image registration, and/or image comparison methods. In some embodiments, systems comprise hardware components such as microprocessors, graphics processors, and/or communications buses configured to communicate, record, analyze, store, manipulate, and/or compare images. Furthermore, in some embodiments, systems comprise a graphical display comprising a graphical user interface (GUI). In some embodiments, the GUI comprises a viewing element that displays an image from a camera. In some embodiments, the GUI comprises a number of viewing elements that each displays an image from a camera. See FIG. 6 and FIG.7. The GUI comprises a number of control elements. For example, the GUI may comprise a control element that is used to select cameras (e.g., one, two, three, or four of the peripheral cameras) for providing images in the viewing elements of the GUI. Thus, in some embodiments, a user can select one or more cameras (e.g., 1, 2, or 3 of the 5 cameras in embodiments comprising 5 cameras) to provide useful camera views of the patient based on the orientation of the patient and the cameras providing the best views of the patient. In some embodiments, the GUI comprises a zoom control element for setting the region of interest of a camera providing images in the viewing elements of the GUI. Accordingly, the GUI allows a user to select a region of interest in a view provided by an individual camera to obtain more precise information for the object in a selected region. 
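A minimal sketch of the zoom and region-of-interest control just described is given below. The camera interface is hypothetical; GigE Vision/GENICAM cameras expose comparable width, height, and offset features, but the exact calls differ by vendor and software development kit.

```python
# Sketch of a controller setting a camera's region of interest so that only the
# windowed subarray of sensor pixels is read out and shown in the GUI viewing
# element. The Camera class is a hypothetical stand-in for a GENICAM/GigE
# Vision camera interface; real SDK calls differ by vendor.

class Camera:
    def __init__(self, camera_id: str, sensor_width: int, sensor_height: int):
        self.camera_id = camera_id
        self.sensor_width = sensor_width
        self.sensor_height = sensor_height
        self.roi = (0, 0, sensor_width, sensor_height)   # default ROI: full sensor

    def set_roi(self, x0: int, y0: int, width: int, height: int) -> None:
        # Clamp the requested window to the physical sensor ("windowing").
        if x0 < 0 or y0 < 0 or x0 + width > self.sensor_width or y0 + height > self.sensor_height:
            raise ValueError("ROI does not fit on the sensor")
        self.roi = (x0, y0, width, height)

    def reset_roi(self) -> None:
        self.roi = (0, 0, self.sensor_width, self.sensor_height)

class Controller:
    """Hypothetical OGTS controller mapping GUI zoom selections to camera ROIs."""
    def __init__(self, cameras: dict):
        self.cameras = cameras

    def select_region_of_interest(self, camera_id: str, x0: int, y0: int,
                                  width: int, height: int) -> tuple:
        cam = self.cameras[camera_id]
        cam.set_roi(x0, y0, width, height)
        # Returning the ROI lets it be recorded in the scene alongside the image.
        return cam.roi

cams = {"camera 1": Camera("camera 1", 5496, 3672)}
ctrl = Controller(cams)
print(ctrl.select_region_of_interest("camera 1", 1748, 836, 2000, 2000))
```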
Furthermore, in some embodiments, the system controls the data sent by a camera so ASTO-41250.601 that only data collected within the selected region of interest is sent to a computer for display on the GUI, thus reducing the amount of data transferred from the camera to the computer and providing increased frame rates (e.g., greater than 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30 Hz, or more). The GUI may comprise a capture button that may be clicked by a user to cause a camera to provide an image and to record the image from the camera or to record images from a number of cameras simultaneously. The GUI may comprise an image retrieval control element to select and retrieve a reference image and/or a set of reference images. The GUI may comprise a scene selection control element to select and retrieve a saved scene comprising a reference image and/or set of reference images, information identifying the cameras that captured the reference image and/or set of reference images, and the region of interest setting of each camera that captured the reference image and/or set of reference images. See FIG.8. In some embodiments, the GUI provides a button to begin tracking mode. In tracking mode, a set of reference images is displayed in viewing elements on the display. Live images provided by the same cameras using the same region of interest settings that were used to obtain the reference images are superimposed over the reference images within the appropriate viewing elements. A user may interact with the viewing elements using a pointing device (e.g., mouse, track ball, track pad, finger or stylus and touch screen, eye tracking, etc.) to manipulate a cursor displayed on the display. Interacting with the viewing elements may comprise translating and/or rotating a reference image to match the position of a live tracking image superimposed on the reference image. In some embodiments, the green and blue RGB components of the reference images are displayed in the viewing elements on the display and the red RGB channel of each of the associated live tracking images is superimposed over the green and blue RGB components of the reference images in the viewing elements on the display. In some embodiments, the red and blue RGB components of the reference images are displayed in the viewing elements on the display and the green RGB channel of each of the associated live tracking images is superimposed over the red and blue RGB components of the reference images in the viewing elements on the display. In some embodiments, the green and red RGB components of the reference images are displayed in the viewing elements on the display and the blue RGB channel of each of the associated live tracking images is superimposed over the green and red RGB components of the reference images in the viewing elements on the display. FIG.9. A ASTO-41250.601 user may manipulate (e.g., transform and/or rotate) a reference image provided in a viewing element on the display to align the reference and live tracking images. An increase in alignment accuracy is indicated by a decrease in the amount of unaligned red and green/blue (or green and red/blue or blue and green/red) portions in the viewing elements. In some embodiments, a user may interact with the GUI to draw a reference line or reference mark on a reference image and/or a live tracking image, e.g., to place a reference line or mark on a viewing element on the display. See, e.g., FIG.9. 
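The channel-wise superimposition described above, in which one color channel is taken from the live tracking image and the complementary channels are taken from the reference image, may be sketched with a few lines of NumPy. Aligned structures recombine into normally colored pixels, while misaligned edges appear as colored fringes. This sketch is illustrative and is not the OGTS rendering code.

```python
# Sketch of the red-over-green/blue overlay used in tracking mode: the red
# channel of the live tracking image is combined with the green and blue
# channels of the reference image. Where the two images align, the channels
# recombine into a normally colored RGB image; misaligned regions appear as
# red or cyan fringes. Illustrative only.
import numpy as np

def tracking_overlay(reference_rgb: np.ndarray, live_rgb: np.ndarray,
                     live_channel: int = 0) -> np.ndarray:
    """Build the overlay image.

    reference_rgb, live_rgb: arrays of shape (H, W, 3), dtype uint8.
    live_channel: which RGB channel (0=R, 1=G, 2=B) is taken from the live
    image; the remaining two channels are taken from the reference image.
    """
    if reference_rgb.shape != live_rgb.shape:
        raise ValueError("reference and live images must share the same shape and ROI")
    overlay = reference_rgb.copy()
    overlay[..., live_channel] = live_rgb[..., live_channel]
    return overlay

# Toy example: a bright square offset by 5 pixels between reference and live.
ref = np.zeros((100, 100, 3), dtype=np.uint8)
live = np.zeros_like(ref)
ref[40:60, 40:60] = 255          # white square in the reference image
live[40:60, 45:65] = 255         # the same square, shifted right in the live image
out = tracking_overlay(ref, live)
# Pixels where only the live square is present are pure red; pixels where only
# the reference square is present are cyan; the overlapping region is white.
print(out[50, 42], out[50, 62], out[50, 50])   # [0 255 255] [255 0 0] [255 255 255]
```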
For example, a reference line can be provided on the GUI to intersect with a reference point in a treatment room, e.g., the treatment isocenter, as viewed in a viewing element on the display. Thus, in some embodiments, a reference line serves a similar function as a laser line used in a treatment room. Thus, in some embodiments, a reference line may provide a virtual laser line. After setting the reference image or reference mark on one or more viewing elements on the display, the reference image or reference mark provides a fixed reference point that does not move with the images provided on the viewing element. Accordingly, moving the object until a point or marker on the object as viewed in the live tracking image intersects with the fixed reference point on two cameras provides a technology for aligning the object with a fixed reference point in the room. Methods In some embodiments, the technology relates to embodiments of methods. In some embodiments, the technology provides methods for imaging a patient. In some embodiments, the technology provides methods for treating a patient. In some embodiments, a method for treating a patient comprises a method for imaging a patient. In some embodiments, methods for treating a patient comprise a pre-treatment patient immobilization and imaging phase 1000 (FIG.10A–FIG. 10D) and a treatment phase 2000 (FIG.11A–FIG.11D). For example, e.g., as shown in FIG. 10A, the technology provides a method for pre-treatment patient immobilization and imaging 1000 (also known as a “simulation” phase). The method for pre-treatment patient immobilization and imaging 1000 comprises starting 1100 an optical guidance tracking system (OGTS) session. In some embodiments, starting 1100 an OGTS session comprises moving a patient positioning apparatus or a patient support (e.g., a patient support that is a component of a patient positioning system) to a patient loading position. Further, the method for pre-treatment patient immobilization and imaging 1000 comprises determining 1200 if initial patient ASTO-41250.601 immobilization is needed. Initial patient immobilization may be needed, e.g., for a new patient, a new treatment of a patient, a treatment of a new region of a patient, or for a treatment of a patient in a new patient posture. As shown in FIG.10A, if an initial patient immobilization is needed (YES), then the method for pre-treatment patient immobilization and imaging 1000 comprises performing an initial patient immobilization method 1300 (FIG. 10B). If an initial patient immobilization is not needed (NO), then the method for pre-treatment patient immobilization and imaging 1000 comprises performing a subsequent patient immobilization method 1400 (FIG. 10C). After performing the initial patient immobilization method 1300 or the subsequent patient immobilization method 1400, the pre-treatment patient immobilization method comprises obtaining 1500 a CT scan of the patient (FIG.10D). FIG. 10B shows an embodiment of a method for initial patient immobilization 1300. As shown in FIG.10B, embodiments of a method for initial patient immobilization 1300 comprise an initialization and tracking initiation step 1310. The initialization and tracking initiation step 1310 comprises setting up the cameras and preparing the OGTS for imaging a patient position setup and/or a configuration setup as described below. 
As shown in FIG.10B, the initialization and tracking initiation step 1310 comprises selecting 1311 a number of cameras for use in imaging a patient position setup and/or a configuration setup. In some embodiments, at least three cameras (e.g., an overhead camera and two peripheral cameras that are mutually orthogonal with each other) are selected for use in imaging a patient position setup and/or a configuration setup. In some embodiments, four or five cameras are selected (e.g., comprising at least an overhead camera and two peripheral cameras that are mutually orthogonal with each other) for use in imaging a patient position setup and/or a configuration setup. As further shown in FIG.10B, embodiments of the initialization and tracking initiation step 1310 comprise resetting 1312 regions of interest. In some embodiments, embodiments of the initialization and tracking initiation step 1310 comprise resetting reference lines or reference marks. Further, the initialization and tracking initiation step 1310 comprises starting 1313 tracking by acquiring video provided by the selected cameras. Tracking 1313 may also include displaying a live image provided by each of the selected cameras on a display in a separate window so that each live image is viewable by a user. The live images show real-time video of the patient position setup and/or configuration setup from multiple, orthogonal views (e.g., from the top and at least two peripheral views) provided by the selected cameras. In some embodiments, embodiments of methods ASTO-41250.601 comprise drawing a reference line or reference mark on a live tracking image, e.g., to place a reference line or mark on a viewing element on a display. In some embodiments, the reference line or reference mark intersects with a reference point in a treatment room, e.g., the treatment isocenter, as viewed in a viewing element on the display. Embodiments of methods for initial patient immobilization 1300 further comprise loading 1320 a patient (e.g., placing a patient) on the patient positioning apparatus (PPA) or patient support. Embodiments of methods for initial patient immobilization 1300 further comprise determining 1330 a patient posture and configuring the patient positioning apparatus (PPA) or patient support to support the patient posture, e.g., by supporting the patient body to be in a comfortable and stable position appropriate for treatment. Determining 1330 the patient posture may comprise manipulating, guiding, and/or applying a force (e.g., by a technician or by the patient positioning apparatus or patient support) to the patient to provide the patient in a posture appropriate for treatment. Configuring the PPA or patient support may comprise moving (e.g., translating and/or rotating) the entire PPA or patient support or may comprise moving (e.g., translating and/or rotating) one or more components of the PPA or patient support (e.g., one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops). In some embodiments, methods comprise viewing the patient posture and/or position of the PPA or patient support on a live tracking image and adjusting the patient posture and/or position of the PPA or patient support using a reference line or reference mark provided on the live tracking image that marks a reference point in a treatment room, e.g., the treatment isocenter. Next, embodiments of methods for initial patient immobilization 1300 comprise determining 1340 if the patient is ready to continue. 
Determining 1340 if the patient is ready to continue may comprise asking the patient if she is comfortable, determining that the patient is in a stable posture, determining that the patient is in a posture appropriate for treatment, and/or otherwise confirming that it is appropriate to proceed to subsequent steps of the method for initial patient immobilization 1300. If the patient is not ready to continue (NO), then the step of determining 1330 the patient posture and configuring the patient positioning apparatus (PPA) or patient support to support the patient posture and the step of determining 1340 if the patient is ready to continue are repeated. If the patient is ready to continue (YES), then the method proceeds to saving 1350 the patient position setup scene. ASTO-41250.601 Saving 1350 the patient position setup scene primarily records the posture (e.g., patient position) of the patient in a position appropriate for subsequent treatment. Saving 1350 the patient position setup scene comprises saving a list of the selected cameras providing images of the patient position, saving each of the images of the patient position provided by each of the selected cameras, and saving the region of interest that was saved as an image for each of the selected cameras during image acquisition. In some embodiments, saving 1350 the patient position setup scene comprises optionally saving patient identifying information, saving the date and time when the patient position setup scene is saved, saving the type of treatment to be performed on the patient in a subsequent treatment phase, saving information identifying the OGTS user performing the method for initial patient immobilization 1300, etc. Next, embodiments of methods for initial patient immobilization 1300 comprise selecting 1360 a number of cameras for use in imaging the patient and PPA or patient support in the imaging position. In some embodiments, at least three cameras (e.g., an overhead camera and two peripheral cameras that are mutually orthogonal with each other) are selected for use in imaging the patient and PPA or patient support in the imaging position. In some embodiments, four or five cameras are selected (e.g., comprising at least an overhead camera and two peripheral cameras that are mutually orthogonal with each other) for use in imaging the patient and PPA or patient support in the imaging position. Embodiments of methods for initial patient immobilization 1300 comprise moving 1370 the PPA or patient support to the imaging position. Moving 1370 the PPA or patient support may comprise translating the PPA or patient support along the X, Y, or Z axis and/or rotating the PPA or patient support around one or more of the X, Y, or Z axis. In some embodiments, selecting 1360 a number of cameras for use in imaging the patient and/or PPA or patient support in the imaging position is performed prior to moving 1370 the PPA or patient support to the imaging position. In some embodiments, moving 1370 the PPA or patient support to the imaging position is performed prior to selecting 1360 a number of cameras for use in imaging the patient and/or PPA or patient support in the imaging position. In some embodiments, methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position. 
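A reference line of the kind just described, e.g., a virtual laser line through the projection of the treatment isocenter, may be sketched as an overlay drawn at a fixed pixel location on every displayed frame. The pixel coordinates and color below are illustrative assumptions.

```python
# Sketch of drawing a fixed reference line (a "virtual laser line") onto each
# displayed frame at the pixel column/row where a room reference point, e.g.,
# the treatment isocenter, projects into the camera view. Because the line is
# drawn at fixed pixel coordinates, it does not move with the live image, so
# the patient or patient support can be moved until the relevant anatomy or
# marker intersects the line. Coordinates and color are illustrative.
import numpy as np

def draw_reference_cross(frame: np.ndarray, col: int, row: int,
                         color=(0, 255, 0), thickness: int = 1) -> np.ndarray:
    """Overlay a vertical and a horizontal reference line crossing at (col, row)."""
    out = frame.copy()
    out[:, col:col + thickness] = color      # vertical reference line
    out[row:row + thickness, :] = color      # horizontal reference line
    return out

# Illustrative use: mark the assumed isocenter projection at pixel (1000, 900)
# of a live tracking frame before displaying it in the GUI viewing element.
live_frame = np.zeros((1836, 2000, 3), dtype=np.uint8)
annotated = draw_reference_cross(live_frame, col=1000, row=900)
```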
In some embodiments, methods comprise viewing the PPA or patient support on a live tracking image and ASTO-41250.601 adjusting the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point. Next, embodiments of methods for initial patient immobilization 1300 comprise determining 1380 if the patient is ready for imaging. Determining 1380 if the patient is ready for imaging may comprise determining that the PPA or patient support and the patient are in the correct position for imaging. In some embodiments, determining 1380 if the patient is ready for imaging may optionally comprise informing the patient that she is positioned for imaging, asking the patient if she is comfortable, determining that the patient is in a stable posture, determining that the patient is in a posture appropriate for treatment, and/or otherwise confirming that it is appropriate to proceed to subsequent steps of the method for initial patient immobilization 1300. If the patient is not ready for imaging (NO), then the step of moving 1370 the PPA or patient support to the imaging position and the step of determining 1380 if the patient is ready for imaging may be repeated. If the patient is ready for imaging (YES), then the method proceeds to saving 1390 the imaging position setup scene. Saving 1390 the imaging position setup scene comprises saving a list of the selected cameras providing images of the imaging position, saving each of the images of the imaging position provided by each of the selected cameras, and saving the region of interest of each of the selected cameras. In some embodiments, saving 1390 the imaging position setup scene comprises optionally saving patient identifying information, saving the date and time when the imaging position setup scene is saved, saving the type of treatment to be performed on the patient in a subsequent treatment phase, saving information identifying the OGTS user performing the method for initial patient immobilization 1300, etc. After saving 1390 the imaging position setup scene, the method comprises obtaining 1500 a CT scan (FIG.10D), e.g., as described below. FIG. 10C shows an embodiment of a method for subsequent patient immobilization 1400 (e.g., to reproduce a PPA or patient support configuration, patient position, and/or imaging position previously saved in a patient support configuration setup scene, patient position setup scene, and/or imaging position setup scene). As shown in FIG.10C, embodiments of a method for subsequent patient immobilization 1400 comprise retrieving 1411 a saved configuration setup scene to provide a retrieved configuration setup scene. In some embodiments, a saved configuration setup scene was previously saved during performing a method for obtaining 1500 a CT scan of a patient after performing a method for initial patient immobilization 1300 (e.g., comprising saving 1560 a configuration setup scene), e.g., as described below. The retrieved ASTO-41250.601 configuration setup scene comprises saved images of the PPA or patient support configuration (e.g., images showing views of the PPA or patient support from at least three orthogonal directions), a list of cameras that provided the saved images of the PPA or patient support configuration, and the region of interest of each of the selected cameras that provided the images during image acquisition. 
In some embodiments, methods further comprise displaying each of the saved images showing views of the PPA or patient support from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows. Each saved and displayed image shows a view of the configuration setup provided by the selected cameras listed in the saved configuration setup scene. The images of the retrieved configuration setup scene (e.g., displayed on the display) provide reference images for correctly configuring the PPA or patient support. Next, in some embodiments, methods comprise configuring 1412 a patient positioning apparatus or patient support. In some embodiments, the retrieved configuration setup scene comprises information describing the configuration of the PPA or patient support for use in configuring 1412 the patient positioning apparatus or patient support. For instance, in some embodiments, the retrieved configuration setup scene comprises information describing the location of the PPA or patient support and/or the position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops). In some embodiments, configuring 1412 a patient positioning apparatus or patient support comprises configuring the PPA or patient support according to a standard preset describing the approximate location of the PPA or patient support and/or the approximate position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops). Next, embodiments of methods for subsequent patient immobilization 1400 provided herein comprise determining 1420 if the PPA or patient support is configured correctly. In some embodiments, determining 1420 if the PPA or patient support is configured correctly comprises using the images of the retrieved configuration setup scene (e.g., displayed on the display) as reference images and live video of the PPA or patient support for correctly configuring the PPA or patient support. In particular, the information saved in the configuration setup scene providing the list of cameras that provided the saved images of the PPA or patient support configuration and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the PPA or patient support and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the cameras in the saved configuration setup scene. Accordingly, the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the PPA or patient support that were saved in the saved configuration setup scene and that are provided in the retrieved configuration setup scene. A user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras. In some embodiments, methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position.
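As described above, the retrieved scene drives both the display (one window per orthogonal view) and the live acquisition (the same cameras, cropped to the same regions of interest). The sketch below illustrates that step using OpenCV for display; the actual OGTS camera interface is not specified in this document, so `open_camera` and `set_roi` are hedged stand-ins, and the `SetupScene` fields follow the illustrative schema sketched earlier.

```python
import cv2  # OpenCV, used here for camera capture and window display


def open_camera(camera_id: str):
    """Stand-in for the OGTS camera interface; assumes numeric device indices."""
    return cv2.VideoCapture(int(camera_id))


def set_roi(frame, roi):
    """Crop a frame to the (x, y, w, h) region of interest saved with the scene."""
    x, y, w, h = [int(v) for v in roi]
    return frame[y:y + h, x:x + w]


def show_reference_views(scene, reference_images):
    """Display each saved orthogonal view in its own window, as the method describes."""
    for cap in scene.captures:
        window = f"reference_{cap.camera_id}"
        cv2.namedWindow(window, cv2.WINDOW_NORMAL)
        cv2.imshow(window, set_roi(reference_images[cap.camera_id], cap.roi))
    cv2.waitKey(1)


def start_tracking(scene):
    """Select the cameras listed in the scene and yield live, ROI-cropped frames."""
    handles = {cap.camera_id: open_camera(cap.camera_id) for cap in scene.captures}
    while True:
        live = {}
        for cap in scene.captures:
            ok, frame = handles[cap.camera_id].read()
            if ok:
                live[cap.camera_id] = set_roi(frame, cap.roi)
        yield live  # one dict of live views per iteration, keyed by camera
```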
In some embodiments, methods comprise viewing the PPA or patient support on a live tracking image and adjusting the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point. The live video from each of the selected cameras showing the PPA or patient support is superimposed on the display over the associated reference image of the PPA or patient support previously saved by the same camera. When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras. When a reference image and a live image are aligned, the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align. Unaligned regions appear as green or red regions that indicate the PPA or patient support is not in the same position or configuration as the position or configuration shown in the reference images. As described in the Examples, the OGTS is used to calculate the misalignment between the reference images and the live images. The cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm). Accordingly, the ∆X, ∆Y, and/or ∆Z displacement(s) appropriate to position and/or configure the PPA or patient support or components thereof to the position in real space recorded by the reference images is obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ∆X, ∆Y, and/or ∆Z displacement according to the pixel size ASTO-41250.601 per mm relationship. Furthermore, the OGTS software also allows for rotating images to determine ∆ψ, ∆φ, and ∆θ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 1420 if the PPA or patient support is configured and/or positioned correctly. If the PPA or patient support is not configured or positioned correctly (NO), a user may use the information from the OGTS relating to the ∆X, ∆Y, and/or ∆Z displacement and/or ∆ψ, ∆φ, and ∆θ rotations appropriate for configuring the PPA or patient support correctly, and the step of configuring 1412 the PPA or patient support and the step of determining 1420 if the PPA is configured correctly may be repeated. If the PPA or patient support is configured and/or positioned correctly (YES), then the method for subsequent patient immobilization 1400 proceeds to the next step of retrieving 1431 a saved patient position setup scene. As shown in FIG.10C, embodiments of a method for subsequent patient immobilization 1400 comprise retrieving 1431 a saved patient position setup scene to provide a retrieved patient position setup scene. In some embodiments, a saved patient position setup scene was previously saved during performing a method for initial patient immobilization 1300 (e.g., comprising saving 1350 a patient position setup scene). 
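The tracking-mode display described above (the red component of the reference image combined with the green and blue components of the live image, so that aligned regions look correctly colored and misaligned regions show up as red or green fringes) can be reproduced with a few array operations. The sketch below assumes 8-bit BGR frames of equal ROI size, as delivered by OpenCV, and illustrates the display principle only; it is not the OGTS rendering code.

```python
import numpy as np


def overlay_reference_on_live(reference_bgr: np.ndarray, live_bgr: np.ndarray) -> np.ndarray:
    """Compose an overlay: red channel from the reference, green/blue from the live view.

    Where the live view matches the reference, the red and green/blue content lines up and
    the region appears in natural color; where they differ, red (reference-only) or green
    (live-only) regions reveal the misalignment.
    """
    if reference_bgr.shape != live_bgr.shape:
        raise ValueError("reference and live frames must have the same ROI size")
    overlay = live_bgr.copy()                   # keep blue and green channels from the live image
    overlay[:, :, 2] = reference_bgr[:, :, 2]   # take the red channel from the reference (BGR order)
    return overlay
```

In practice one such overlay would be computed per selected camera on every live frame before display, so that each orthogonal view shows its own alignment cue.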
The retrieved patient position setup scene comprises saved images of the patient position (e.g., images showing views of the patient from at least three orthogonal directions), a list of cameras that provided the saved images of the patient position, and the region of interest of each of the selected cameras that provided the images during image acquisition. The images show orthogonal views of the patient positioned on the PPA or patient support in a patient posture appropriate for treatment. In some embodiments, methods further comprise displaying each of the saved images showing views of the patient position from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows. Each saved and displayed image shows a view of the patient position setup provided by the selected cameras listed in the retrieved configuration setup scene. The images of the retrieved patient position setup scene (e.g., displayed on the display) provide reference images for correctly positioning the patient in the appropriate patient position (e.g., patient posture). Next, in some embodiments, methods comprise loading 1432 a patient on the PPA or patient support and positioning 1433 the patient on the PPA or patient support. In some embodiments, the retrieved patient position setup scene comprises information describing the patient position (e.g., patient posture) and/or the configuration of the PPA ASTO-41250.601 or patient support for use in positioning 1433 the patient in the correct patient position. For instance, in some embodiments, the retrieved patient position setup scene comprises information describing the location and/or the position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops) to provide the appropriate patient position. In some embodiments, positioning 1433 the patient comprises positioning the patient to a standard patient position (e.g., a standard patient posture) that may be modified as needed by adjusting the configuration of the PPA or patient support (e.g., by adjusting the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops). Next, embodiments of methods for subsequent patient immobilization 1400 provided herein comprise determining 1440 if the patient is positioned correctly. In some embodiments, determining 1440 if the patient is positioned correctly comprises using the images of the retrieved patient position setup scene (e.g., displayed on the display) as reference images and live video of the positioned patient for correctly positioning the patient (e.g., by configuring the PPA or patient support). In particular, the information in the retrieved patient position setup scene providing the list of cameras that provided the saved images of the patient position and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the saved patient position setup scene. 
Accordingly, the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient and patient position that were saved in the saved patient position setup scene and provided in the retrieved patient position setup scene. A user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras. In some embodiments, methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position. In some embodiments, methods comprise viewing the patient on the PPA or patient support on the live tracking image and adjusting the patient and/or the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point. The live video from each of the selected cameras showing the patient position is superimposed on the display over the associated saved reference image of the patient position previously saved by the same camera. When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras. When a reference image and a live image are aligned, the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align. Unaligned regions appear as green or red regions that indicate the patient is not in the same position as the position shown in the reference images. As described in the Examples, the OGTS is used to calculate the misalignment between the reference images and the live images. The cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)). Accordingly, the ∆X, ∆Y, and/or ∆Z displacement(s) appropriate to position the patient in real space to match the position recorded by the reference images are obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ∆X, ∆Y, and/or ∆Z displacement according to the pixel size per mm relationship. Furthermore, the OGTS software also allows for rotating images to determine ∆ψ, ∆φ, and ∆θ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 1440 if the patient is positioned correctly. If the patient is not positioned correctly (NO), a user may use the information from the OGTS relating to the ∆X, ∆Y, and/or ∆Z displacement and/or ∆ψ, ∆φ, and ∆θ rotations appropriate for positioning the patient correctly, and the step of positioning 1433 the patient and the step of determining 1440 if the patient is positioned correctly may be repeated. If the patient is positioned correctly (YES), then the method for subsequent patient immobilization 1400 proceeds to the next step of retrieving 1450 a saved imaging position setup scene.
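The Examples are said to describe how the OGTS calculates the misalignment; the text here gives only the principle, so the following is one plausible, hedged sketch. It estimates the in-plane pixel shift between a reference view and the corresponding live view by phase correlation and converts it to millimetres using the calibrated pixels-per-mm value at the isocenter plane. How the two image axes of a given camera map onto the room axes (X, Y, Z) depends on that camera's mounting and calibration, which is represented here by a hypothetical `axes` label.

```python
import cv2
import numpy as np


def pixel_shift(reference_gray: np.ndarray, live_gray: np.ndarray) -> tuple:
    """Estimate the (dx, dy) pixel shift between the reference and live grayscale views."""
    ref = np.float32(reference_gray)
    liv = np.float32(live_gray)
    (dx, dy), _response = cv2.phaseCorrelate(ref, liv)
    return dx, dy


def shift_to_mm(dx_px: float, dy_px: float, pixels_per_mm: float, axes=("X", "Y")) -> dict:
    """Convert an image-plane pixel shift to room-frame displacements in mm.

    pixels_per_mm is the calibrated pixel size per mm relationship at the isocenter plane
    (approximately 1 to 5 pixels/mm per the description above). axes names the two room
    axes seen by this camera, e.g., an overhead camera might map to ("X", "Y") and a
    lateral camera to ("Y", "Z"); the true mapping comes from the camera calibration.
    """
    return {axes[0]: dx_px / pixels_per_mm, axes[1]: dy_px / pixels_per_mm}


# Illustrative numbers only: a 3.2-pixel offset at 2.0 pixels/mm corresponds to a 1.6 mm move.
# deltas = shift_to_mm(*pixel_shift(ref_view, live_view), pixels_per_mm=2.0, axes=("X", "Z"))
```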
As shown in FIG.10C, embodiments of a method for subsequent patient immobilization 1400 comprise retrieving 1450 a saved imaging position setup scene to provide a retrieved imaging position setup scene. In some embodiments, a saved imaging position setup scene was previously saved during performing a method for initial patient immobilization 1300 (e.g., comprising saving 1390 an imaging setup scene). The retrieved imaging position setup scene comprises saved images of the patient and PPA or patient support in the imaging position (e.g., images showing views ASTO-41250.601 of the patient and PPA or patient support in the imaging position from at least three orthogonal directions), a list of cameras that provided the saved images of the patient and PPA or patient support in the imaging position, and the region of interest of each of the selected cameras that provided the images during image acquisition. The images show orthogonal views of the patient positioned on the PPA or patient support in a patient posture appropriate for treatment and at the proper imaging position. In some embodiments, methods further comprise displaying each of the saved images showing views of the patient and PPA or patient support in the imaging position from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows. Each saved and displayed image shows a view of the imaging position setup provided by the selected cameras listed in the retrieved imaging position setup scene. The images of the imaging position setup scene (e.g., displayed on the display) provide reference images for correctly positioning the patient in the appropriate imaging position. Next, in some embodiments, methods comprise moving 1460 the patient and PPA or patient support to the imaging position. In some embodiments, the retrieved imaging position setup scene comprises information describing the imaging position for use in moving 1460 the patient and PPA or patient support to the correct imaging position. Next, embodiments of methods for subsequent patient immobilization 1400 provided herein comprise determining 1470 if the patient is ready for imaging. In some embodiments, determining 1470 if the patient is ready for imaging comprises using the images of the retrieved imaging position setup scene (e.g., displayed on the display) as reference images and live video of the patient positioned on the PPA or patient support for correctly positioning the patient on the PPA or patient support for imaging. In particular, the information in the retrieved imaging position setup scene providing the list of cameras that provided the saved images of the imaging position and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient on the PPA or patient support and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the imaging position setup scene. Accordingly, the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient on the PPA or patient support that were saved in the imaging position setup scene and provided in the retrieved imaging position scene. A user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras. 
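Besides translations, the method described above also determines rotational offsets (∆ψ, ∆φ, and ∆θ) by rotating images relative to the references; the computation itself is deferred to the Examples. The sketch below shows one generic way to estimate the in-plane rotation seen by a single camera, by matching ORB features between the reference and live views and fitting a partial affine (rotation, translation, uniform scale) transform. This is an illustrative technique under those assumptions, not necessarily the OGTS method.

```python
import math

import cv2
import numpy as np


def in_plane_rotation_deg(reference_gray: np.ndarray, live_gray: np.ndarray) -> float:
    """Estimate the rotation (degrees) of the live view relative to the reference view.

    Expects 8-bit grayscale views cropped to the saved region of interest.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
    kp_live, des_live = orb.detectAndCompute(live_gray, None)
    if des_ref is None or des_live is None:
        raise RuntimeError("not enough texture to match features")

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_live), key=lambda m: m.distance)[:200]

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_live[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Rotation + translation (+ uniform scale) that maps reference points onto live points.
    matrix, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if matrix is None:
        raise RuntimeError("could not fit a transform between the views")
    return math.degrees(math.atan2(matrix[1, 0], matrix[0, 0]))
```

Combining the per-camera in-plane rotations from three mutually orthogonal views gives the rotations about the X, Y, and Z room axes that correspond to ∆ψ, ∆φ, and ∆θ.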
In some ASTO-41250.601 embodiments, methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position. In some embodiments, methods comprise viewing the patient and the PPA or patient support on a live tracking image and adjusting the patient and/or PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point. The live video from each of the selected cameras showing the imaging position is superimposed on the display over the associated saved reference image of the imaging position previously saved by the same camera. When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras. When a reference image and a live image are aligned, the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align. Unaligned regions appear as green or red regions that indicate the patient is not in the same imaging position as the imaging position shown in the reference images. As described in the Examples, the OGTS is used to calculate the misalignment between the reference images and the live images. The cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm). Accordingly, the ∆X, ∆Y, and/or ∆Z displacement(s) appropriate to position the patient and PPA or patient support in real space to match the imaging position recorded by the reference images are obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ∆X, ∆Y, and/or ∆Z displacement according to the pixel size per mm relationship. Furthermore, the OGTS software also allows for rotating images to determine ∆ψ, ∆φ, and ∆θ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 1470 if the patient is ready for imaging. If the patient is not ready for imaging (NO), a user may use the information from the OGTS relating to the ∆X, ∆Y, and/or ∆Z displacement and/or ∆ψ, ∆φ, and ∆θ rotations appropriate for positioning the patient and PPA or patient support correctly for imaging, and the step of moving 1460 the patient and PPA or patient support and the step of determining 1470 if the patient is ready for imaging may be repeated. If the ASTO-41250.601 patient is ready for imaging (YES), then the method for subsequent patient immobilization 1400 proceeds to the next step of obtaining 1500 a CT scan. FIG. 10D shows an embodiment of a method for obtaining 1500 a CT scan (e.g., a pre-treatment CT scan) of a patient, e.g., after performing an initial patient immobilization method 1300 (FIG. 10B) or performing a subsequent patient immobilization method 1400 (FIG. 10C). As shown in FIG. 
10D, embodiments of methods for obtaining 1500 a CT scan of a patient comprise starting 1510 a CT scan, e.g., to obtain a CT scan of the patient. In some embodiments, a CT scan is obtained using a multi-axis medical imaging apparatus as described in U.S. Pat. App. Pub. No. 2022/0183641, which is incorporated herein by reference. Next, methods for obtaining 1500 a CT scan of a patient comprise determining 1520 if the CT scan is completed. In some embodiments, determining 1520 if the CT scan is completed comprises determining if the CT scan is of adequate quality for treatment planning and treatment of the patient. If the CT scan is not completed (NO), then the steps of starting 1510 the CT scan and determining 1520 if the CT scan is completed are repeated. If the CT scan is completed (YES), then the method for obtaining 1500 a CT scan of a patient comprises saving the CT scan as a pre-treatment CT scan of the patient to be used subsequently for treatment planning and treatment. Next, the method proceeds to moving 1530 the PPA or patient support to an unloading position and unloading 1540 the patient from the PPA or patient support. If the method for obtaining 1500 a CT scan of a patient was performed after performing an initial patient immobilization method 1300, extra care is taken during the unloading 1540 to minimally disturb the PPA or patient support and thus preserve the configuration of the PPA or patient support previously determined in step 1330 of the initial patient immobilization method 1300. Next, if the method for obtaining 1500 a CT scan of a patient was performed after performing a subsequent patient immobilization method 1400 (SUBSEQUENT), then the OGTS session is ended. If the method for obtaining 1500 a CT scan of a patient was performed after performing an initial patient immobilization method 1300, then the method comprises saving 1560 the configuration setup scene. Saving 1560 the configuration setup scene primarily records the configuration of the PPA or patient support for subsequent use in supporting a patient during treatment. Saving 1560 the configuration setup scene comprises saving a list of the selected cameras providing images of the PPA or patient support configuration, saving each of the images of the PPA or patient support configuration provided by each of the selected cameras, and saving the region of interest of each of the selected cameras during image acquisition. In ASTO-41250.601 some embodiments, saving 1560 the configuration setup scene comprises optionally saving patient identifying information, saving the date and time when the configuration setup scene is saved, saving the type of treatment to be performed on the patient in a subsequent treatment phase, saving information identifying the OGTS user performing the method for pre-treatment patient immobilization and imaging 1000, saving information describing the positions of the PPA or patient support or components thereof, etc. Further, embodiments of the technology provided herein relate to methods of treating a patient with radiation 2000. For example, e.g., as shown in FIG. 11A, methods of treating a patient comprise using the OGTS system as described herein for treating a patient. The method shown in FIG.11A comprises starting 2100 an OGTS session, loading 2200 a patient on a PPA or patient support in a position appropriate for treatment (FIG. 11B), imaging 2300 the patient (FIG.11C), treating 2400 the patient (FIG. 11D), and ending the OGTS session. 
As shown in FIG.11B, embodiments of methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise moving 2210 a PPA or patient support to a loading position. Next, methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise retrieving 2220 a saved configuration setup scene to provide a retrieved configuration setup scene. In some embodiments, a saved configuration setup scene was previously saved during performing a method for obtaining 1500 a CT scan of a patient after performing a method for initial patient immobilization 1300 (e.g., comprising saving 1560 a configuration setup scene). The retrieved configuration setup scene comprises saved images of the PPA or patient support configuration (e.g., images showing views of the PPA or patient support from at least three orthogonal directions), a list of cameras that provided the saved images of the PPA or patient support configuration, and the region of interest of each of the selected cameras that provided the images during image acquisition. In some embodiments, methods further comprise displaying each of the saved images showing views of the PPA or patient support from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows. Each saved and displayed image shows a view of the configuration setup provided by the selected cameras listed in the saved configuration setup scene. The images of the retrieved configuration setup scene (e.g., displayed on the display) provide reference images for correctly configuring the PPA patient support. In some embodiments, methods comprise drawing a reference line ASTO-41250.601 or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter. In some embodiments, methods comprise viewing the PPA or patient support on the live tracking image and adjusting the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point. Next, in some embodiments, methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise configuring 2230 a PPA or patient support. In some embodiments, the retrieved configuration setup scene comprises information describing the configuration of the PPA or patient support for use in configuring 2230 the PPA or patient support. For instance, in some embodiments, the retrieved configuration setup scene comprises information describing the location of the PPA or patient support and/or the position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops). In some embodiments, configuring 2230 a PPA or patient support comprises configuring the PPA or patient support according to a standard preset describing the approximate location of the PPA or patient support and/or the approximate position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops). Next, embodiments of methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise determining 2240 if the PPA or patient support is configured correctly. 
In some embodiments, determining 2240 if the PPA or patient support is configured correctly comprises using the images of the retrieved configuration setup scene (e.g., displayed on the display) as reference images and live video of the PPA or patient support for correctly configuring the PPA or patient support. In particular, the information saved in the configuration setup scene providing the list of cameras that provided the saved images of the PPA or patient support configuration and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the PPA or patient support and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the cameras in the saved configuration setup scene. Accordingly, the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the PPA or patient support that were saved in the saved configuration setup scene and that are provided in the retrieved configuration setup scene. A user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras. In some embodiments, methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position. In some embodiments, methods comprise viewing the PPA or patient support on the live tracking image and adjusting the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point. The live video from each of the selected cameras showing the PPA or patient support is superimposed on the display over the associated reference image of the PPA or patient support previously saved by the same camera. When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras. When a reference image and a live image are aligned, the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align. Unaligned regions appear as green or red regions that indicate the PPA or patient support is not in the same position or configuration as the position or configuration shown in the reference images. As described in the Examples, the OGTS is used to calculate the misalignment between the reference images and the live images. The cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)).
Accordingly, the ∆X, ∆Y, and/or ∆Z displacement(s) appropriate to position and/or configure the PPA or patient support or components thereof to the position in real space recorded by the reference images is obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ∆X, ∆Y, and/or ∆Z displacement according to the pixel size per mm relationship. Furthermore, the OGTS software also allows for rotating images to determine ∆ψ, ∆φ, and ∆θ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 2240 if the PPA or patient support is configured and/or positioned correctly. If the PPA or patient support is not configured or positioned correctly (NO), a user may use the information from the OGTS relating to the ∆X, ∆Y, and/or ∆Z displacement and/or ∆ψ, ∆φ, and ∆θ rotations appropriate for configuring the ASTO-41250.601 PPA or patient support correctly, and the step of configuring 2230 the PPA or patient support and the step of determining 2240 if the PPA is configured correctly may be repeated. If the PPA or patient support is configured and/or positioned correctly (YES), then the method for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment proceeds to the next step of retrieving 2250 a saved patient position setup scene. As shown in FIG.11B, embodiments of methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise retrieving 2250 a saved patient position setup scene to provide a retrieved patient position setup scene. In some embodiments, a saved patient position setup scene was previously saved during performing a method for initial patient immobilization 1300 (e.g., comprising saving 1350 a patient position setup scene). The retrieved patient position setup scene comprises saved images of the patient position (e.g., images showing views of the patient from at least three orthogonal directions), a list of cameras that provided the saved images of the patient position, and the region of interest of each of the selected cameras that provided the images during image acquisition. The images show orthogonal views of the patient positioned on the PPA or patient support in a patient posture appropriate for treatment. In some embodiments, methods further comprise displaying each of the saved images showing views of the patient position from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows. Each saved and displayed image shows a view of the patient position setup provided by the selected cameras listed in the retrieved configuration setup scene. The images of the retrieved patient position setup scene (e.g., displayed on the display) provide reference images for correctly positioning the patient in the appropriate patient position (e.g., patient posture). Next, in some embodiments, methods comprise loading 2260 a patient on the PPA or patient support and positioning 2270 the patient on the PPA or patient support. 
In some embodiments, the retrieved patient position setup scene comprises information describing the patient position (e.g., patient posture) and/or the configuration of the PPA or patient support for use in positioning 2270 the patient in the correct patient position. For instance, in some embodiments, the retrieved patient position setup scene comprises information describing the location and/or the position of one or more components of the PPA or patient support (e.g., the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops) to provide the appropriate patient position. In some embodiments, positioning 2270 the patient comprises ASTO-41250.601 positioning the patient to a standard patient position (e.g., a standard patient posture) that may be modified as needed by adjusting the configuration of the PPA or patient support (e.g., by adjusting the position of one or more of a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops). Next, embodiments of methods for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment comprise determining 2280 if the patient is positioned correctly. In some embodiments, determining 2280 if the patient is positioned correctly comprises using the images of the retrieved patient position setup scene (e.g., displayed on the display) as reference images and live video of the positioned patient for correctly positioning the patient (e.g., by configuring the PPA or patient support). In particular, the information in the retrieved patient position setup scene providing the list of cameras that provided the saved images of the patient position and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the saved patient position setup scene. Accordingly, the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient and patient position that were saved in the saved patient position setup scene and provided in the retrieved patient position setup scene. A user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras. In some embodiments, methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position. In some embodiments, methods comprise viewing the patient on the PPA or patient support on the live tracking image and adjusting the patient and/or the PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point. The live video from each of the selected cameras showing the patient position is superimposed on the display over the associated saved reference image of the patient position previously saved by the same camera. When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras. 
When a reference image and a live image are aligned, the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align. Unaligned regions appear as green or red regions ASTO-41250.601 that indicate the patient is not in the same position as the position shown in the reference images. As described in the Examples, the OGTS is used to calculate the misalignment between the reference images and the live images. The cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm). Accordingly, the ∆X, ∆Y, and/or ∆Z displacement(s) appropriate to position the patient in real space to match the position recorded by the reference images is obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ∆X, ∆Y, and/or ∆Z displacement according to the pixel size per mm relationship. Furthermore, the OGTS software also allows for rotating images to determine ∆ψ, ∆φ, and ∆θ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 2280 if the patient is positioned correctly. If the patient is not positioned correctly (NO), a user may use the information from the OGTS relating to the ∆X, ∆Y, and/or ∆Z displacement and/or ∆ψ, ∆φ, and ∆θ rotations appropriate for positioning the patient correctly, and the step of positioning 2270 the patient and the step of determining 2280 if the patient is positioned correctly may be repeated. If the patient is positioned correctly (YES), then the method for loading 2200 a patient on a PPA or patient support in a position appropriate for treatment proceeds to the next step of imaging 2300 the patient for treatment. As shown in FIG.11C, embodiments of a method for imaging 2300 a patient for treatment comprise retrieving 2310 a saved imaging position setup scene to provide a retrieved imaging position setup scene. In some embodiments, a saved imaging position setup scene was previously saved during performing a method for initial patient immobilization 1300 (e.g., comprising saving 1390 an imaging setup scene). The retrieved imaging position setup scene comprises saved images of the patient and PPA or patient support in the imaging position (e.g., images showing views of the patient and PPA or patient support in the imaging position from at least three orthogonal directions), a list of cameras that provided the saved images of the patient and PPA or patient support in the imaging position, and the region of interest of each of the selected cameras that provided the images during image acquisition. The images show orthogonal views of the patient positioned on the PPA or patient support in a patient ASTO-41250.601 posture appropriate for treatment and at the proper imaging position. 
In some embodiments, methods further comprise displaying each of the saved images showing views of the patient and PPA or patient support in the imaging position from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows. Each saved and displayed image shows a view of the imaging position setup provided by the selected cameras listed in the retrieved imaging position setup scene. The images of the imaging position setup scene (e.g., displayed on the display) provide reference images for correctly positioning the patient in the appropriate imaging position. Next, in some embodiments, methods for imaging 2300 a patient for treatment comprise moving 2320 the patient and PPA or patient support to the imaging position. In some embodiments, the retrieved imaging position setup scene comprises information describing the imaging position for use in moving 2320 the patient and PPA or patient support to the correct imaging position. Next, embodiments of methods for imaging 2300 a patient for treatment comprise determining 2230 if the patient is ready for imaging. In some embodiments, determining 2230 if the patient is ready for imaging comprises using the images of the retrieved imaging position setup scene (e.g., displayed on the display) as reference images and live video of the patient positioned on the PPA or patient support for correctly positioning the patient on the PPA or patient support for imaging. In particular, the information in the retrieved imaging position setup scene providing the list of cameras that provided the saved images of the imaging position and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient on the PPA or patient support and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the imaging position setup scene. Accordingly, the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient on the PPA or patient support that were saved in the imaging position setup scene and provided in the retrieved imaging position scene. A user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras. In some embodiments, methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter. In some embodiments, methods comprise viewing the patient on the PPA or patient support on the live tracking image and adjusting the patient and/or the PPA or patient support using the ASTO-41250.601 reference line or reference mark provided on the live tracking image that marks the reference point. The live video from each of the selected cameras showing the imaging position is superimposed on the display over the associated saved reference image of the imaging position previously saved by the same camera. When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras. 
When a reference image and a live image are aligned, the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align. Unaligned regions appear as green or red regions that indicate the patient is not in the same imaging position as the imaging position shown in the reference images. As described in the Examples, the OGTS is used to calculate the misalignment between the reference images and the live images. The cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm). Accordingly, the ∆X, ∆Y, and/or ∆Z displacement(s) appropriate to position the patient and PPA or patient support in real space to match the imaging position recorded by the reference images are obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ∆X, ∆Y, and/or ∆Z displacement according to the pixel size per mm relationship. Furthermore, the OGTS software also allows for rotating images to determine ∆ψ, ∆φ, and ∆θ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 2230 if the patient is ready for imaging. If the patient is not ready for imaging (NO), a user may use the information from the OGTS relating to the ∆X, ∆Y, and/or ∆Z displacements and/or ∆ψ, ∆φ, and ∆θ rotations appropriate for positioning the patient and PPA or patient support correctly for imaging, and the step of moving 2320 the patient and PPA or patient support and the step of determining 2230 if the patient is ready for imaging may be repeated. If the patient is ready for imaging (YES), then the method for imaging 2300 a patient for treatment proceeds to the next step of obtaining a CT scan. As shown in FIG.11C, embodiments of methods for imaging 2300 a patient for treatment comprise obtaining a CT scan (e.g., a treatment CT scan). In particular, ASTO-41250.601 methods for imaging 2300 a patient for treatment comprise starting 2240 a CT scan, e.g., to obtain a CT scan of the patient. In some embodiments, a CT scan is obtained using a multi-axis medical imaging apparatus as described in U.S. Pat. App. Pub. No. 2022/0183641, which is incorporated herein by reference. Next, methods for imaging 2300 a patient for treatment comprise determining 2250 if the CT scan is completed. In some embodiments, determining 2250 if the CT scan is completed comprises determining if the CT scan is of adequate quality for treatment of the patient. If the CT scan is not completed (NO), then the steps of starting 2240 the CT scan and determining 2250 if the CT scan is completed are repeated. If the CT scan is completed (YES), then the method for imaging 2300 a patient for treatment comprises saving the CT scan of the patient as a treatment CT scan of the patient to be used subsequently for registration with a pre-treatment CT scan and for subsequent treatment. Next, the methods of treating a patient with radiation 2000 proceed to treating 2400 the patient with radiation. 
As shown in FIG.11D, methods of treating 2400 a patient with radiation comprise obtaining a pre-treatment CT scan (e.g., as provided by a method for obtaining 1500 a CT scan of a patient during a pre-treatment patient immobilization and imaging phase 1000, wherein the method comprises saving the CT scan as a pre-treatment CT scan of the patient) and obtaining a treatment CT scan (e.g., as provided by a method for imaging 2300 a patient during a treatment phase 2000, wherein the method comprises saving a CT scan of a patient as a treatment CT scan of the patient). Next, methods comprise registering 2410 the treatment CT scan and the pre-treatment CT scan and obtaining a correction vector. In some embodiments, registering 2410 the treatment CT scan and the pre-treatment CT scan provides for matching a treatment plan to the treatment volume of interest within the patient body so that radiation treatment is delivered accurately to the treatment volume of interest. In some embodiments, registering 2410 the treatment CT scan and the pre-treatment CT scan provides for detecting and evaluating anatomical changes that may have occurred after obtaining the pre-treatment scan and prior to the treatment phase. Accordingly, registering 2410 the treatment CT scan and the pre-treatment CT scan provides important information for patient treatment. Differences between the pre-treatment CT scan and the treatment CT scan are used to determine a correction vector to align the treatment plan (e.g., the treatment beam) to contact the treatment volume of interest within the patient body. In some embodiments, methods comprise verifying correct application of the correction vector to position the patient, e.g., as described below. The technology finds use in treating a number f of treatment fields (e.g., 1, 2, 3, 4, 5, … , f treatment fields) comprising the treatment volume of interest within the patient body. Thus, methods comprise treating a treatment field n, where n is iterated from 1 (treatment field 1) to the number of treatment fields f (treatment field f) to be treated. Next, methods of treating 2400 a patient with radiation comprise selecting 2420 a treatment field n, moving 2430 the patient positioned on the PPA or patient support, and applying the correction vector obtained in step 2410 to the radiation treatment plan. Next, methods of treating 2400 a patient with radiation comprise determining 2440 if treatment of the patient at treatment field n is the first instance of treating the patient at treatment field n. If treatment of the patient at treatment field n is the first instance of treating the patient at treatment field n (YES), then methods of treating 2400 a patient with radiation comprise saving 2460 a treatment scene for field n. If treatment of the patient at treatment field n is not the first instance of treating the patient at treatment field n (NO), then methods of treating 2400 a patient with radiation comprise retrieving 2450 a treatment scene for treatment field n. Saving 2460 a treatment scene for treatment field n comprises saving a list of the selected cameras providing images of the patient and PPA or patient support in position for treatment of treatment field n, saving each of the images of the patient and PPA or patient support in position for treatment of treatment field n provided by each of the selected cameras, and saving the region of interest of each of the selected cameras.
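Returning briefly to the registration step 2410 described above: the document does not specify how the treatment CT scan is registered to the pre-treatment CT scan, so the following is a hedged sketch of one generic possibility, a rigid 3D registration with the open-source SimpleITK toolkit (an assumption, not a component of the system described here). Its output, three rotations and three translations, is one form the correction vector could take.

```python
import SimpleITK as sitk


def correction_vector(pretreatment_path: str, treatment_path: str) -> dict:
    """Rigidly register the treatment CT to the pre-treatment CT and report the offsets."""
    fixed = sitk.ReadImage(pretreatment_path, sitk.sitkFloat32)    # pre-treatment CT scan
    moving = sitk.ReadImage(treatment_path, sitk.sitkFloat32)      # treatment (day-of) CT scan

    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)

    final = reg.Execute(fixed, moving)
    # Euler3DTransform parameters: three rotations (radians) followed by three translations (mm).
    rx, ry, rz, tx, ty, tz = final.GetParameters()
    return {"rotation_rad": (rx, ry, rz), "translation_mm": (tx, ty, tz)}
```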
In some embodiments, saving 2460 the treatment scene for treatment field n comprises optionally saving patient identifying information, saving the date and time when the treatment scene for treatment field n is saved, saving the type of treatment to be performed on the patient, saving information identifying the OGTS user performing the method of treating 2400 a patient with radiation, etc. After saving 2460 the treatment scene for treatment field n, the method comprises determining 2470 if the patient is ready for treatment as described below. Retrieving 2450 a saved treatment scene for treatment field n provides a retrieved treatment scene for treatment field n. In some embodiments, a saved treatment scene for treatment field n was previously saved during performing a method of treating 2400 a patient with radiation (e.g., comprising saving 2460 a treatment scene for treatment field n). The retrieved treatment scene for treatment field n comprises saved images of the patient and PPA or patient support in position for treatment of treatment field n (e.g., images showing views of the patient and PPA or patient support in position for treatment of treatment field n from at least three orthogonal directions), a list of cameras that provided the saved images of the patient and PPA or patient support in position for treatment of treatment field n, and the region of interest of each of the selected cameras that provided the images during image acquisition. The images show orthogonal views of the patient and PPA or patient support in position for treatment of treatment field n. In some embodiments, methods further comprise displaying each of the saved images showing views of the patient and PPA or patient support in position for treatment of treatment field n from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows. Each saved and displayed image shows a view of the patient and PPA or patient support in position for treatment of treatment field n provided by the selected cameras listed in the retrieved treatment scene for treatment field n. The images of the retrieved treatment scene for treatment field n (e.g., displayed on the display) provide reference images for correctly positioning the patient and PPA or patient support in the appropriate location and/or position for treatment of treatment field n. Next, embodiments of methods of treating 2400 a patient with radiation comprise determining 2470 if the patient is ready for treatment. In some embodiments, determining 2470 if the patient is ready for treatment comprises using the images of the retrieved treatment scene for treatment field n (e.g., displayed on the display) as reference images and live video of the patient and the PPA or patient support for correctly positioning the patient and PPA or patient support in the appropriate location and/or position for treatment of treatment field n.
In particular, the information in the treatment scene for treatment field n providing the list of cameras that provided the saved images of the patient and PPA or patient support in position for treatment of treatment field n and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient on the PPA or patient support in position for treatment of treatment field n and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the treatment scene for treatment field n. Accordingly, the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient and PPA or patient support in position for treatment of treatment field n that were saved in the treatment scene for treatment field n and provided in the retrieved treatment scene for treatment field n. A user interacts with ASTO-41250.601 the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras. The live video from each of the selected cameras showing the patient and PPA or patient support in treatment position for treatment of treatment field n is superimposed on the display over the associated saved reference image of the patient and PPA or patient support in treatment position for treatment of treatment field n previously saved by the same camera. When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras. When a reference image and a live image are aligned, the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align. Unaligned regions appear as green or red regions that indicate the patient is not in the same treatment position for treatment of treatment field n as the treatment position for treatment of treatment field n shown in the reference images. As described in the Examples, the OGTS is used to calculate the misalignment between the reference images and the live images. The cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm). Accordingly, the ∆X, ∆Y, and/or ∆Z displacement(s) appropriate to position the patient and PPA or patient support in real space to match the imaging position recorded by the reference images are obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ∆X, ∆Y, and/or ∆Z displacement according to the pixel size per mm relationship. Furthermore, the OGTS software also allows for rotating images to determine ∆ψ, ∆φ, and ∆θ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to determine 2470 if the patient is ready for treatment. 
If the patient is not ready for treatment (NO), a user may use the information from the OGTS relating to the ∆X, ∆Y, and/or ∆Z displacements and/or ∆ψ, ∆φ, and ∆θ rotations appropriate for positioning the patient and PPA or patient support correctly for treatment. Thus, if the patient is not ready for treatment (NO), the step of moving 2430 the patient and PPA or patient support and/or applying the correction vector, the step of determining 2440 if treatment of the patient at treatment field n is the first instance of treating the patient at treatment field n, the appropriate step of saving 2460 the treatment scene for treatment field n or retrieving 2450 a treatment scene for treatment field n, and the step of determining 2470 if the patient is ready for treatment are repeated. If the patient is ready for treatment (YES), then the method of treating 2400 a patient with radiation proceeds to the next step of treating 2480 treatment field n. In some embodiments, treating 2480 treatment field n comprises contacting a region of a patient body within treatment field n with radiation, e.g., photons (e.g., x-rays), electrons, or hadrons (e.g., protons, neutrons, heavy ions (e.g., carbon ions, 4He ions, neon ions, etc.)) as known in the art. Next, methods of treating 2400 a patient with radiation comprise determining 2491 if the treatment of treatment field n is completed. In some embodiments, determining 2491 if the treatment of treatment field n is completed comprises determining if the treatment provided the correct amount of radiation to the treatment field n. If the treatment of treatment field n is not completed (NO), then the steps of treating 2480 the treatment field n and determining 2491 if the treatment of treatment field n is completed are repeated. If the treatment of treatment field n is completed (YES), then the method of treating 2400 a patient with radiation comprises determining 2492 if the treatment plan comprises treating another treatment field. If the treatment plan comprises treating another treatment field (YES), n is increased by 1 (e.g., n = n + 1) and the method returns to step 2420 of the method for treating 2400 a patient with radiation, which comprises selecting the next treatment field n (having the updated value of n). If the treatment plan does not comprise treating another treatment field (NO), then the method comprises ending 2500 the OGTS session.

Methods for monitoring patient motion

In some embodiments, the technology provides methods for monitoring patient motion, e.g., during treatment. Methods for monitoring patient motion find use in identifying patient movements during a treatment phase that may move a treatment field out of a treatment position or that may move healthy tissue into the path of radiation. Methods for monitoring patient motion also find use in monitoring a rhythmic change in the movement of the patient body, e.g., due to breathing motions. For instance, methods for monitoring patient motion comprise using reference image(s) and live video during a treatment phase to monitor and/or identify patient movements that may require intervention by a technician to correct a position of a patient and PPA or patient support. In some embodiments, methods for monitoring patient motion comprise providing images of the patient and PPA or patient support in position for treatment for use as reference images.
In some embodiments, the reference images are provided by a retrieved treatment scene, e.g., as provided by a method or step of retrieving 2450 a saved treatment scene. In some embodiments, the reference images are provided by acquiring images of the patient and PPA or patient support in position for treatment of the patient, e.g., as provided by a method or step of saving 2460 a treatment scene. Accordingly, the reference images provide images of the patient and PPA or patient support in position for treatment, e.g., images showing views of the patient and PPA or patient support in position for treatment from at least three orthogonal directions. In some embodiments, methods further comprise displaying each of the saved images showing views of the patient and PPA or patient support in position for treatment from at least three orthogonal directions on a display in a separate window so that each orthogonal view is viewable by a user in each of the separate windows. Each saved and displayed image shows a view of the patient and PPA or patient support in position for treatment provided by the selected cameras listed in the retrieved treatment scene. The images of the treatment scene (e.g., displayed on the display) provide reference images for monitoring patient motion. In some embodiments, monitoring patient motion comprises using the reference images and live video of the patient and the PPA or patient support to monitor patient position relative to the reference images. In particular, in some embodiments, the information associated with the images of the treatment scene providing the list of cameras that provided the reference images and the region of interest used for each of the selected cameras during image acquisition is used to select the same cameras to provide live video of the patient on the PPA or patient support in position and to set the region of interest for each of the selected cameras to the same region of interest that was saved for each of the images of the treatment scene. Accordingly, the live video provided by the selected cameras shows the same views (e.g., same orthogonal views) of the same regions of interest of the patient and PPA or patient support in position for treatment that were saved in the treatment scene. A user interacts with the GUI of the OGTS software to initiate tracking, which acquires live video from each of the selected cameras. The live video from each of the selected cameras showing the patient and PPA or patient support is superimposed on the display over the associated reference image of the patient and PPA or patient support in treatment position. When in tracking mode, the OGTS software displays the red component of each of the reference images superimposed on (e.g., summed with) the green and blue components of each of the associated live images provided by each of the cameras. When a reference image and a live image are aligned, the red and green colors in the images align to provide a correctly colored RGB image in the region where the images align. Unaligned regions appear as green or red regions that indicate the patient is not in the same treatment position as the treatment position shown in the reference images. As described in the Examples, the OGTS is used to determine a misalignment between the reference images and the live images.
The cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)). Accordingly, the ∆X, ∆Y, and/or ∆Z displacement(s) appropriate to position the patient and PPA or patient support in real space to match the imaging position recorded by the reference images are obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ∆X, ∆Y, and/or ∆Z displacement according to the pixel size per mm relationship. Furthermore, the OGTS software also allows for rotating images to determine ∆ψ, ∆φ, and ∆θ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured. Accordingly, embodiments comprise using the OGTS to monitor patient motion by monitoring the alignment of the reference images with the live video. In some embodiments, a user may observe the alignment of the reference images with the live video and determine if the reference images and the live video are aligned or are mis-aligned. If the reference images and live video are mis-aligned, the user may determine if an intervention is required to stop treatment and/or re-align the patient. In some embodiments, image registration methods (e.g., a Lucas-Kanade image alignment algorithm, a Baker-Dellaert-Matthews image alignment algorithm, or the OpenCV image alignment package) are used to determine if the reference images and the live video are aligned or are mis-aligned. In some embodiments, a threshold for mismatch is set (e.g., from 1 to 50 mm (e.g., 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0, 14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0, 18.5, 19.0, 19.5, 20.0, 20.5, 21.0, 21.5, 22.0, 22.5, 23.0, 23.5, 24.0, 24.5, 25.0, 25.5, 26.0, 26.5, 27.0, 27.5, 28.0, 28.5, 29.0, 29.5, 30.0, 30.5, 31.0, 31.5, 32.0, 32.5, 33.0, 33.5, 34.0, 34.5, 35.0, 35.5, 36.0, 36.5, 37.0, 37.5, 38.0, 38.5, 39.0, 39.5, 40.0, 40.5, 41.0, 41.5, 42.0, 42.5, 43.0, 43.5, 44.0, 44.5, 45.0, 45.5, 46.0, 46.5, 47.0, 47.5, 48.0, 48.5, 49.0, 49.5, or 50.0 mm)). In some embodiments, methods comprise providing an alert or alarm (e.g., a visual, audio, or haptic alert) if the reference images and live video are mis-aligned more than the threshold value. In some embodiments, methods comprise suggesting a correction (e.g., a translation and/or a rotation) that will position the patient correctly so that treatment can proceed. For instance, a user may use the information from the OGTS relating to the ∆X, ∆Y, and/or ∆Z translation and/or ∆ψ, ∆φ, and ∆θ rotations appropriate for positioning the patient and PPA or patient support correctly for treatment. In some embodiments, methods comprise determining a breathing cycle for a patient and determining appropriate compensatory rhythmic translations and/or rotations of the patient to be produced automatically through automated translations and/or rotations of the PPA or patient support.
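One possible way to automate the mismatch check and threshold alert described above is sketched below, assuming a calibrated pixels-per-mm value for the camera and OpenCV; the function names and the choice of phase correlation (a pixel-based, Fourier-domain registration) are illustrative rather than the OGTS vendor code:

import cv2
import numpy as np

def misalignment_mm(reference_bgr, live_bgr, pixels_per_mm: float):
    """Estimate the in-plane shift between a reference image and a live frame
    (phase correlation on grayscale images) and convert it from pixels to mm."""
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    live = cv2.cvtColor(live_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    (dx_px, dy_px), _response = cv2.phaseCorrelate(ref, live)
    return dx_px / pixels_per_mm, dy_px / pixels_per_mm

def check_motion(reference_bgr, live_bgr, pixels_per_mm=3.7, threshold_mm=2.0):
    """Return True (and the offsets) if the patient has moved more than the threshold
    in this camera view; an alert could then be raised for the technician."""
    dx_mm, dy_mm = misalignment_mm(reference_bgr, live_bgr, pixels_per_mm)
    moved = max(abs(dx_mm), abs(dy_mm)) > threshold_mm
    if moved:
        print(f"ALERT: patient moved {dx_mm:+.1f} mm / {dy_mm:+.1f} mm in this view")
    return moved, (dx_mm, dy_mm)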
Methods for verifying correct application of a correction vector

In some embodiments, the technology provides methods for verifying correct application of the correction vector. For example, methods comprise obtaining a correction vector, e.g., as described herein, by obtaining a pre-treatment CT scan (e.g., as provided by a method for obtaining 1500 a CT scan of a patient during a pre-treatment patient immobilization and imaging phase 1000, wherein the method comprises saving the CT scan as a pre-treatment CT scan of the patient); obtaining a treatment CT scan (e.g., as provided by a method for imaging 2300 a patient during a treatment phase 2000, wherein the method comprises saving a CT scan of a patient as a treatment CT scan of the patient); and registering 2410 the treatment CT scan and the pre-treatment CT scan to obtain a correction vector. Next, methods for verifying correct application of the correction vector comprise providing reference images. In some embodiments, the reference images are provided by a retrieved treatment scene, e.g., as provided by a method or step of retrieving 2450 a saved treatment scene. In some embodiments, the reference images are provided by acquiring images of the patient and PPA or patient support in position for treatment of the patient, e.g., as provided by a method or step of saving 2460 a treatment scene. Further, methods comprise using the correction vector to calculate a ∆X, ∆Y, and/or ∆Z displacement in real space and/or a ∆ψ, ∆φ, and/or ∆θ rotation in real space that is appropriate to align the treatment volume of interest within the patient body to the treatment plan so that radiation treatment is delivered accurately to the treatment volume of interest. Next, the pixel size per mm relationship at the isocenter plane in actual space (e.g., approximately 1 to 5 pixels/mm (e.g., approximately 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, or 5.0 pixels/mm)) is used to calculate an offset (e.g., a translation and/or rotation) for the reference images on the display. Methods comprise applying the offset to the reference images to provide offset reference images on the display to indicate the proper position of the patient and PPA or patient support for treatment. Accordingly, the offset reference images provide images of the patient and PPA or patient support in proper position for treatment upon application of the correction vector. Properly applying the correction vector to move the patient and PPA or patient support in real space according to the correction vector will align the live video with the offset reference images, and thus provide a verification that the correction vector has been applied correctly.
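The offsetting of the reference images by the correction vector may be sketched as follows, assuming OpenCV, a per-camera pixels-per-mm calibration, and the in-plane components of the correction vector expressed in mm (all names are illustrative):

import cv2
import numpy as np

def offset_reference(reference_bgr, dx_mm: float, dy_mm: float, pixels_per_mm: float = 3.7):
    """Shift the displayed reference image by the correction vector (converted from mm
    to pixels) so that it marks the patient position expected after the correction."""
    h, w = reference_bgr.shape[:2]
    shift = np.float32([[1, 0, dx_mm * pixels_per_mm],
                        [0, 1, dy_mm * pixels_per_mm]])
    return cv2.warpAffine(reference_bgr, shift, (w, h))

# Verification idea: overlay the live video on offset_reference(...); once the PPA or
# patient support has moved by the correction vector in real space, the composite shows
# no red/green fringes, indicating the correction was applied correctly.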
Methods for aligning images

As described herein, methods comprise aligning a reference image with a live video image, e.g., aligning a reference image of a PPA or patient support configuration with a live video image of a PPA or patient support configuration, aligning a reference image of a patient position with a live video image of a patient position, aligning a reference image of an imaging position with a live video image of an imaging position, and/or aligning a reference image of a location and/or position of the patient and PPA or patient support for treatment of treatment field n with a live video image of a location and/or position of the patient and PPA or patient support for treatment of treatment field n. In some embodiments, methods comprise drawing a reference line or reference mark on a live tracking image that marks a reference point in the room, e.g., the treatment isocenter or imaging position. In some embodiments, methods comprise viewing the patient and/or the PPA or patient support on the live tracking image and adjusting the patient and/or PPA or patient support using the reference line or reference mark provided on the live tracking image that marks the reference point. See, e.g., FIG. 9. Further, while certain embodiments of the technology are described herein for convenience in terms of methods comprising superimposing and aligning a red component of a reference image and the green and blue components of an associated live image provided by a camera, the technology is not limited to such embodiments. The technology also encompasses essentially equivalent embodiments of methods comprising superimposing and aligning a green component of a reference image and the red and blue components of an associated live image provided by a camera and embodiments comprising superimposing and aligning a blue component of a reference image and the red and green components of an associated live image provided by a camera. In some embodiments, alignment of a live image and a reference image is performed manually by a user interacting with a computer through an input device (e.g., a mouse, keyboard, touch screen, track ball, virtual reality device, etc.) to manipulate (e.g., translate and/or rotate) the live image displayed on the screen and align it with the reference image. A user may use her eye to align the live image and the reference image to determine an adequate match between the live image and the reference image. However, in some embodiments, alignment of a live image and a reference image is performed by image registration methods, e.g., by a Lucas-Kanade image alignment algorithm, a Baker-Dellaert-Matthews image alignment algorithm, or with the OpenCV image alignment package. In some embodiments, images are aligned using an automated feature-based alignment, an automated pixel-based alignment, or a Fast Fourier Transform performed by a method encoded in software and executed by a computer. Additional image registration technologies are provided by Cocianu (2023) “Evolutionary Image Registration: A Review” Sensors 23: 967; Bierbrier (2022) “Estimating medical image registration error and confidence: A taxonomy and scoping review” Medical Image Analysis 81: 102531; John and John (2019) “A Review of Image Registration Methods in Medical Imaging” International Journal of Computer Applications 178: 38–45; and Chen (2021) “Deep Learning in Medical Image Registration” Progress in Biomedical Engineering 3: 012003, each of which is incorporated herein by reference.
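As one concrete example of the automated feature-based alignment referenced above, the following Python/OpenCV sketch matches ORB keypoints between a reference image and a live frame and fits a similarity transform, returning the in-plane translation (in pixels) and rotation (in degrees) for one camera view. The routine and its names are illustrative and are not the OGTS implementation:

import cv2
import numpy as np

def estimate_alignment(reference_bgr, live_bgr):
    """Feature-based registration: match ORB keypoints between the reference and live
    images and fit a similarity transform (in-plane rotation + translation + scale)."""
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    live = cv2.cvtColor(live_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    kp_live, des_live = orb.detectAndCompute(live, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_live), key=lambda m: m.distance)[:200]
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_live[m.trainIdx].pt for m in matches])
    matrix, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    dx_px, dy_px = matrix[0, 2], matrix[1, 2]                       # in-plane translation (pixels)
    angle_deg = np.degrees(np.arctan2(matrix[1, 0], matrix[0, 0]))  # in-plane rotation (degrees)
    return dx_px, dy_px, angle_deg

The translation can then be converted to mm using the pixel size per mm relationship, and the in-plane rotation contributes to the ∆ψ, ∆φ, or ∆θ estimate associated with that camera's viewing axis.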
In some embodiments, steps of the described image alignment methods are implemented in software code, e.g., a series of procedural steps instructing a computer and/or a microprocessor to produce and/or transform data as described above. In some embodiments, software instructions are encoded in a programming language such as, e.g., BASIC, C, C++, Java, MATLAB, Mathematica, Perl, Python, or R. In some embodiments, one or more steps or components are provided in individual software objects connected in a modular system. In some embodiments, the software objects are extensible and portable. In some embodiments, the objects comprise data structures and operations that transform the object data. In some embodiments, the objects are used by manipulating their data and invoking their methods. Accordingly, embodiments provide software objects that imitate, model, or provide concrete entities, e.g., numbers, shapes, and data structures, that are manipulable. In some embodiments, software objects are operational in a computer or in a microprocessor. In some embodiments, software objects are stored on a computer readable medium. In some embodiments, a step of a method described herein is provided as an object method. In some embodiments, data and/or a data structure described herein is provided as an object data structure. Embodiments comprise use of code that produces and manipulates software objects, e.g., as encoded using a language such as but not limited to Java, C++, C#, Python, PHP, Ruby, Perl, Object Pascal, Objective-C, Swift, Scala, Common Lisp, and Smalltalk.

Movable and configurable patient positioning system

In some embodiments, the technology relates to a patient positioning system comprising a movable and configurable patient support. In some embodiments, the technology relates to a patient positioning system comprising a movable and configurable motorized patient support. See U.S. Pat. No. 11,529,109; see U.S. Pat. App. Ser. No. 17/894,335 and U.S. Prov. Pat. App. Ser. No. 63/438,978, each of which is incorporated herein by reference. Certain aspects of embodiments of the patient positioning system and patient support technologies are described below. In some embodiments, the patient support is structured to translate in the X, Y, and/or Z directions. In some embodiments, the patient support is structured to rotate around the X, Y, and/or Z axes. In some embodiments, the patient support is configured to move with six degrees-of-freedom, e.g., the patient support is structured to translate in the X, Y, and/or Z directions, and the patient support is structured to rotate around the X, Y, and/or Z axes. For example, in some embodiments, the patient support comprises a pivotable base and the patient support is structured to pivot around X, Y, and/or Z axes to provide pitch, roll, and yaw rotations. Accordingly, in embodiments comprising a pivotable base, the configurable patient support is structured to tilt or pivot relative to a horizontal plane of the translatable member or any other fixed horizontal surface. Embodiments comprise motors and drive mechanisms engaged with the patient positioning system and/or with the patient support to translate and/or rotate the patient positioning system and/or to translate and/or rotate the patient support.
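By way of example of such a software object, the six-degree-of-freedom pose of the patient support might be modeled as a small data object whose method applies a correction vector; the class name, fields, units, and sign conventions below are illustrative assumptions, not the patient support control software:

from dataclasses import dataclass

@dataclass
class SupportPose:
    """Six-degree-of-freedom pose of the patient support: translations in mm along
    X, Y, Z and rotations in degrees about X (pitch), Y (roll), and Z (yaw)."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0
    yaw: float = 0.0

    def apply_correction(self, dx=0.0, dy=0.0, dz=0.0, dpitch=0.0, droll=0.0, dyaw=0.0):
        """Return the pose after applying a correction from OGTS tracking or CT registration."""
        return SupportPose(self.x + dx, self.y + dy, self.z + dz,
                           self.pitch + dpitch, self.roll + droll, self.yaw + dyaw)

current = SupportPose(z=1200.0)                          # support raised 1.2 m, for example
target = current.apply_correction(dx=2.4, dz=-1.1, dyaw=0.8)
print(target)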
As a further example, in some embodiments, the patient positioning system comprises a translatable member that is vertically translatable such that the translatable member articulates towards and away from a surface on which the patient positioning system is supported. In some embodiments, the translatable member is mounted to a supporting structure that is in turn mounted to the surface. In some embodiments, the supporting structure provides stability to the patient positioning system and houses a drive mechanism to effect the vertical movement of the translatable member. The patient support is configured to receive and secure a patient in a generally upright position. In some embodiments, the patient support is rotatably mounted to the translatable member such that the patient support is rotatable about a vertical axis (e.g., an essentially vertical and/or substantially vertical axis) relative to the translatable member. In some embodiments, a lower end of the patient support is mounted to a rotating disc. In some embodiments, an upper end of the patient support is mounted to another rotating disc. By this arrangement, the patient support is rotatably mounted to the translatable member such that the patient support is rotatable about a vertical axis. Also, as the translatable member is able to articulate vertically, the patient support mounted to the translatable member may similarly articulate vertically. Further, in some embodiments, the patient support is translatable in a horizontal (e.g., XY) plane, e.g., in addition to being rotatable about a vertical axis. In some embodiments, the patient support is translatable in a horizontal plane orthogonal to the vertical axis of rotation. In some embodiments, the patient support comprises two pairs of parallel rails in orthogonal relation, the patient support being slidably connected to a first pair of rails for translation in a first orthogonal direction and the first pair of rails being slidably connected to a second pair of rails for translation in a second orthogonal direction. In some embodiments, motors and drive mechanisms are engaged with each pair of rails to translate the patient support in the X and Y directions. In some embodiments, the patient support comprises a back rest, a seat pan, a shin rest, an arm rest, a head rest, and/or foot braces or heel stops. Embodiments provide that the back rest, seat pan, shin rest, arm rest, head rest, and/or foot braces or heel stops is/are configurable among a number of positions to accommodate patient ingress and/or egress from the patient support system (e.g., from the patient support assembly) and/or to support a patient in a number of positions for imaging or treatment. FIG. 4 shows a configurable patient support 100 comprising one or more configurable and movable components, e.g., a back rest 110 (e.g., a configurable and movable back rest), an arm rest 170 (e.g., a configurable and movable arm rest), a seat pan 140 (e.g., a configurable and movable seat pan), a shin rest 150 (e.g., a configurable and movable shin rest), and/or a foot brace (e.g., a configurable and movable foot brace) or a heel stop 160 (e.g., a configurable and movable heel stop). In some embodiments, the patient support (e.g., integrated patient support or non-integrated patient support) further comprises a head rest (e.g., a configurable and movable head rest).
In some embodiments, each component (e.g., back rest 110, arm rest 170, seat pan 140, shin rest 150, foot brace or heel stop 160, and head rest) is manipulable by a human user to place the component in the appropriate position for the desired configuration. In some embodiments, each component (e.g., back rest 110, arm rest 170, seat pan 140, shin rest 150, foot brace or heel stop 160, and head rest) may be moved (e.g., translated and/or rotated) by a human applying force to the component using her hands and no more than typical force provided by an average human. In some embodiments, the patient support 100 comprises one or more motorized components, e.g., a motorized back rest (e.g., a back rest 110 operatively engaged with a back rest motor), a motorized head rest (e.g., a head rest operatively engaged with a head rest motor), a motorized arm rest (e.g., an arm rest 170 operatively engaged with an arm rest motor), a motorized seat pan (e.g., a seat pan 140 operatively engaged with a seat pan motor), a motorized shin rest (e.g., a shin rest 150 operatively engaged with a shin rest motor), and/or a motorized foot brace (e.g., a foot brace operatively engaged with a foot brace motor) or a motorized heel stop (e.g., a heel stop 160 operatively engaged with a heel stop motor). Accordingly, the back rest motor is structured to move (e.g., translate and/or rotate) the back rest 110, the head rest motor is structured to move (e.g., translate and/or rotate) the head rest, the arm rest motor is structured to move (e.g., translate and/or rotate) the arm rest 170, the seat pan motor is structured to move (e.g., translate and/or rotate) the seat pan 140, the shin rest motor is structured to move (e.g., translate and/or rotate) the shin rest 150, and/or the foot brace motor is structured to move (e.g., translate and/or rotate) the foot brace, or the heel stop motor is structured to move (e.g., translate and/or rotate) the heel stop. In some embodiments, the OGTS provides a component (e.g., a computer, a microcontroller, and/or a microprocessor) configured to coordinate control and/or movement (e.g., translation) of the patient support in the X, Y, and/or Z directions. In some embodiments, the OGTS provides a component (e.g., a computer, a microcontroller, and/or a microprocessor) configured to coordinate control and/or movement (e.g., rotation) of the patient support around the X, Y, and/or Z axes. In some embodiments, the OGTS provides a component (e.g., a computer, a microcontroller, and/or a microprocessor) configured to coordinate control and/or movement (e.g., translation) of the patient support in the X, Y, and/or Z directions and configured to coordinate control and/or movement (e.g., rotation) of the patient support around the X, Y, and/or Z axes. In some embodiments, the OGTS provides a component (e.g., a computer, a microcontroller, and/or a microprocessor) configured to coordinate control and/or movement of one or more of the motorized components, e.g., the motorized back rest, the motorized head rest, the motorized arm rest, the motorized seat pan, the motorized shin rest, and/or the motorized foot brace or motorized heel stop, to provide the patient support into one or more specific configurations comprising the motorized back rest, the motorized head rest, the motorized arm rest, the motorized seat pan, the motorized shin rest, and/or the motorized foot brace or motorized heel stop in specified positions.
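Coordination of the motorized components into a named configuration might be organized as sketched below; the motor interface, configuration name, and target values are hypothetical placeholders, not the patient support or OGTS API:

# Hypothetical coordination layer for the motorized patient-support components.
# Component names follow the description above; the MotorDriver interface is assumed.

TARGET_CONFIGURATIONS = {
    # component -> target position (degrees or mm, depending on the axis; values illustrative)
    "seated_imaging": {"back_rest": 80.0, "seat_pan": 0.0, "shin_rest": 15.0,
                       "arm_rest": 30.0, "head_rest": 0.0, "heel_stop": 120.0},
}

class MotorDriver:
    """Stand-in for the real motor controller: in practice it applies current or voltage
    until the component reaches its commanded position (details are hardware-specific)."""
    def __init__(self, name):
        self.name = name
    def move_to(self, position):
        print(f"{self.name}: moving to {position}")

def apply_configuration(config_name: str, drivers: dict):
    for component, position in TARGET_CONFIGURATIONS[config_name].items():
        drivers[component].move_to(position)

drivers = {name: MotorDriver(name) for name in TARGET_CONFIGURATIONS["seated_imaging"]}
apply_configuration("seated_imaging", drivers)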
Coordinating control and/or movement (e.g., translation and/or rotation) of the patient support or one or more configurable component of the patient support comprises providing or removing a current or voltage from a power supply to a motor engaged with the patient support and/or one or more configurable component of the patient support to move the patient support and/or one or more configurable component of the patient support to the appropriate position. Upon determining one or more of ∆X, ∆Y, ∆Z, ∆ψ, ∆φ, and/or ∆θ as described above, the patient positioning system, patient support, and/or one or more of the back rest, the seat pan, the shin rest, the arm rest, the head rest, and/or the foot braces or the heel stops are translated and/or rotated as appropriate to move the patient support and/or the patient to the appropriate position for imaging or treatment. In some embodiments, moving (e.g., translating and/or rotating) the patient positioning system, patient support, and/or one or more of the back rest, the seat pan, the shin rest, the arm rest, the head rest, and/or the foot braces or the heel stops is performed by motors and drive mechanisms (e.g., by providing current and/or voltage to one or more motors) engaged with the patient positioning system, patient support, and/or one or more of the back rest, the seat pan, the shin rest, the arm rest, the head rest, and/or the foot braces or the heel stops. In some embodiments, moving (e.g., translating and/or rotating) the patient positioning system, patient support, and/or one or more of the back rest, the seat pan, the shin rest, the arm rest, the head rest, and/or the foot braces or the heel stops is performed by a user engaging with and moving the patient positioning system, patient support, and/or one or more of the back rest, the seat pan, the shin rest, the arm rest, the head rest, and/or the foot braces or the heel stops. Although the disclosure herein refers to certain illustrated embodiments, it is to be understood that these embodiments are presented by way of example and not by way of limitation.

Examples

Example 1 – Optical guidance and tracking system

An OGTS is provided comprising three cameras as shown in the schematic drawing provided by FIG. 3. Camera 1 (“front”) and Camera 2 (“lateral”) are mounted in the horizontal plane (XY plane) and 90 degrees apart. A patient is placed to face Camera 1. Translational disagreements in the left-right (X) and up-down (Z) directions between the live image and reference images and/or rotational disagreements about the Camera 1 axis (Y) between the live image and reference images are seen on Camera 1. Camera 2 is provided at a 90-degree angle to Camera 1, and Camera 2 thus provides a lateral view of the patient. Translational disagreements in the forward-backward (Y) and up-down (Z) directions between the live image and reference images and/or rotational disagreements about the Camera 2 axis (X) between the live image and reference images are seen on Camera 2. Camera 3 is provided at a position that is orthogonal both to Camera 1 and to Camera 2. Accordingly, Camera 3 is provided at a position that is on a line normal to the horizontal (XY) plane (i.e., above or below the patient). In practical terms, Camera 3 is positioned above the patient and provides an overhead view of the patient.
Translational disagreements in the forward-backward (Y) and left-right (X) directions between the live image and reference images and/or rotational disagreements about the Camera 3 axis (Z) between the live image and reference images are seen on Camera 3. Assuming the patient has only been translated in space, aligning the live images and the reference images in each camera view provides the ∆X, ∆Y, and/or ∆Z displacement in real space that is appropriate to position the object (e.g., patient) at the same position where the object (e.g., patient) was positioned when the reference images were obtained and saved. Furthermore, rotations in space can be calculated by analyzing the differences between live camera views and saved images from the three cameras.

Example 2 – Upright imaging and positioning system

The technology relates to an imaging system comprising an upright patient positioning apparatus or a patient positioner of a patient positioning system (see, e.g., FIG. 1A and FIG. 1B) and an upright helical CT scanner (see, e.g., FIG. 2A to FIG. 2D). See, e.g., U.S. Pat. App. Pub. No. 2022/0183641, MULTI-AXIS MEDICAL IMAGING (U.S. Pat. App. Ser. No. 17/535,091), which is explicitly incorporated herein by reference. The upright patient positioning apparatus or patient positioner of a patient positioning system provides for positioning a patient in a seated or perched (semi-standing) position while the CT scanner acquires diagnostic quality CT images of the patient in the treatment orientation. See FIG. 5A. The beam delivery system is typically behind the back wall and comprises a high energy x-ray or electron beam delivery system or a particle therapy beam delivery system. The upright imaging and positioning system allows for installing an OGTS comprising five cameras. A typical camera installation in a treatment room is shown in FIG. 5B. Cameras 1, 2, 4, and 5 are installed in the horizontal plane, and Camera 3 is installed directly above the isocenter. As discussed above, Cameras 1, 2, and 3 are orthogonal to each other. The technology is not limited by this arrangement. For example, the technology also contemplates embodiments in which Cameras 4 and 5 are also orthogonal to each other and each is orthogonal to Camera 3. In the embodiment shown in FIG. 5B, Camera 1 and Camera 2 are closest to the room entrance and are used together with Camera 3 for capturing the reference images of the patient in the setup position and for verifying the patient in the setup position during subsequent imaging and treatment procedures. Camera 4 and Camera 5 are used to verify and track the patient position during imaging and in various treatment fields attained by rotating the patient about the vertical (Z) axis. FIG. 5C shows a design of an embodiment of an OGTS system treatment room 710, control room 720, and technical room 730. A view of section A showing the treatment room 710 and a view of section B showing the control room 720 and technical room 730 are shown in FIG. 5D and FIG. 5E, respectively. The treatment room 710 comprises an OGTS, and the OGTS comprises an overhead camera E (235) and four peripheral cameras A, B, C, and D (231, 232, 233, and 234) in exemplary but not limiting positions. The treatment room further comprises a patient positioning system. A technician 901 is shown in the treatment room 710.
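Returning to the three-orthogonal-camera geometry of Example 1 (Cameras 1, 2, and 3 here), the per-camera in-plane offsets can be combined into a single ∆X, ∆Y, ∆Z estimate. The sketch below assumes offsets already converted to mm, a pure translation of the patient, axis assignments as stated in Example 1, and arbitrary sign conventions; the function name is illustrative:

import numpy as np

# Per-camera in-plane offsets (in mm, already converted from pixels) measured by
# aligning live and reference images. Axis assignments follow Example 1:
#   Camera 1 (front, looks along Y):    image x -> X, image y -> Z
#   Camera 2 (lateral, looks along X):  image x -> Y, image y -> Z
#   Camera 3 (overhead, looks along Z): image x -> X, image y -> Y
def combine_camera_offsets(cam1_xy, cam2_xy, cam3_xy):
    dX = np.mean([cam1_xy[0], cam3_xy[0]])   # X is seen by Cameras 1 and 3
    dY = np.mean([cam2_xy[0], cam3_xy[1]])   # Y is seen by Cameras 2 and 3
    dZ = np.mean([cam1_xy[1], cam2_xy[1]])   # Z is seen by Cameras 1 and 2
    return dX, dY, dZ

# usage (offsets in mm): pure translation of the patient assumed
print(combine_camera_offsets((2.0, -1.0), (0.5, -1.2), (2.2, 0.6)))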
One inset shows a keyboard, mouse, and display in the treatment room 710 for use by a technician 901, e.g., to control the OGTS and perform the methods described herein; and a second inset shows a keyboard, mouse, and display in the control room 720, e.g., to control the OGTS and perform the methods described herein. The treatment room and control room may each comprise a desk or shelf, a network socket, and a single-phase power outlet. The technical room may comprise computers for data analysis, data storage, computing power, system diagnostics, and/or other computational support for the OGTS system. The technical room comprises a number of network sockets for connecting to components of the OGTS in the treatment room and in the control room. In some embodiments, the four peripheral cameras A, B, C, and D (231, 232, 233, and 234) are at a height that is the height of the treatment room isocenter, e.g., with a tolerance of ± 50 mm. In some embodiments, each of the four peripheral cameras A, B, C, and D (231, 232, 233, and 234) is positioned from 2300 mm to 6800 mm from the treatment room isocenter. In some embodiments, the cameras have a lens range of 2300–3400 mm, 3200–4800 mm, or 4600–6800 mm to show the field of view at 80% to 120%. In some embodiments, the OGTS uses 20.2-megapixel cameras (e.g., SVS Vistek EXO 183 TR, 1” sensor format) with a 5,496 × 3,672 pixel resolution and high quality lenses. In some embodiments, the cameras have an exemplary focal length of 25 mm, 35 mm, or 50 mm. The cameras are located at a defined distance from the treatment room isocenter, and the lenses are selected according to the distance between the camera and the treatment room isocenter to attain a field of view at the isocenter plane that is approximately 1.5 m (vertical) × 1.0 m (horizontal). Thus, in some embodiments, the ratio between a real world distance and a distance in pixels (e.g., distance in pixels across the camera sensor and/or distance in pixels across an image produced by this camera) is approximately 1500 mm / 5496 pixels = 0.273 mm / pixel. This is equivalent to a distance of approximately 3.7 pixels across the sensor per real world distance or translation of 1 mm. Or, in other words, one pixel in the image recorded by the camera represents approximately 300 µm of distance in the real world. All cameras are connected to a central computer (the “host computer”) using dedicated Ethernet ports and fixed IP addresses for each camera. The host computer processes the images and communicates to an unlimited number of client computers that provide the graphical user interfaces to the users. One client computer may serve as a master client that is used to start new OGTS sessions and stop open sessions on all clients. All other clients may be used to monitor OGTS activities. The graphical user interface (GUI) of the OGTS software as displayed on the client computers is shown in FIG. 6. The camera configurations, settings, operational parameters, and/or camera calibrations (e.g., conversions of pixels to real world distances) are set on the host GUI and are described in the OGTS user manual. The GUI allows for selecting two to four cameras installed in the horizontal plane to be displayed in the left and right windows. FIG. 6. The top camera is always selected and is displayed in the top middle window.
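The pixel-to-distance arithmetic above can be checked with a few lines (values taken from this example; the rounding is approximate):

# Rough camera-calibration arithmetic for Example 2 (values from the description above).
sensor_pixels_vertical = 5496   # long axis of the 5,496 x 3,672 sensor
field_of_view_mm = 1500         # ~1.5 m vertical field of view at the isocenter plane

mm_per_pixel = field_of_view_mm / sensor_pixels_vertical   # ~0.273 mm/pixel
pixels_per_mm = 1.0 / mm_per_pixel                          # ~3.7 pixels/mm

print(f"{mm_per_pixel:.3f} mm/pixel, {pixels_per_mm:.1f} pixels/mm")
# A live image misaligned by 100 pixels in this view corresponds to roughly 27 mm of motion.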
As shown in FIG. 6, Camera 1 and Camera 2 are selected – Camera 1 is facing the patient positioning apparatus or patient positioner and Camera 2 shows the patient positioning apparatus or patient positioner from the left side. Camera 3 is above the patient positioning apparatus or patient positioner. At the left side of the image shown in FIG. 6, software control buttons are shown that are used for a user to start and stop tracking, to start tracking with new reference images, and to save the current viewing configuration as a new recorded scene. The camera images shown in FIG. 6 are the images provided by the cameras when the cameras are fully zoomed out to provide the full field of view for each camera. These fields of view may be used to capture and verify information describing the configuration of the patient positioning apparatus or patient positioner, e.g., the initial position settings of the patient positioning apparatus or patient positioner before the patient is placed on the patient positioning apparatus or patient positioner and the patient is immobilized in the appropriate posture for imaging or treatment. The initial position settings of the patient positioning apparatus or patient positioner may include information describing the position of the seat pan, the foot rest or heel stop, the shin rest, the back rest, and/or the arm rests. A region of interest (ROI) may be selected in each camera view to zoom in on a specific region in the image (e.g., by selecting a subset of the camera sensor array to display as an image). The ROI may be used to zoom in on a specific region of interest of the patient, typically during treatment of the patient. Once a ROI is selected for a specific camera, the camera sends only the ROI data to the host computer, which provides a faster data transfer rate and hence a faster monitoring repetition rate. A recorded scene contains information identifying the specific viewing environment (e.g., the selected cameras and the region of interest for each camera). FIG. 7 shows a person placed in the patient positioning apparatus or patient positioner with the cameras zoomed in according to different ROIs for each camera. The zoomed-out views of the patient (see FIG. 6) are used to capture the patient posture and immobilization devices, and the zoomed-in or specific ROI views are used to focus on specific regions of interest (e.g., the anatomical region to be treated). The zoomed-in views provide higher frame rates from the cameras because the cameras send less information over the network. The OGTS may record and save scenes for use later. Once a patient is in a posture at a particular location, the configuration of the OGTS (e.g., the selected cameras and ROIs) may be saved as a setup scene or a treatment scene using the software controls in the left panel on the GUI. The camera images of the patient and/or patient positioning apparatus or patient positioner are saved as reference images. The scene and reference images may be retrieved to reproduce the patient posture at a subsequent time. Scene selection panels are shown in FIG. 8. Thumbnails of the real images that comprise a scene are shown in a selection dialogue. The left panel shows the setup scenes that were recorded; the right panel shows some of the treatment scenes that were recorded. Any of these scenes may be selected during the workflow. Setup scenes are used at the initial stages of the workflow and typically use zoomed-out views to provide more information.
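The ROI selection and its effect on the monitoring rate can be illustrated as below; CameraHandle and its methods are hypothetical stand-ins for the actual camera driver, which exposes a hardware region of interest on the sensor:

# Illustrative ROI handling; names and the speedup estimate are assumptions for illustration.
class CameraHandle:
    def __init__(self, ip, full_width=5496, full_height=3672):
        self.ip = ip
        self.full = (full_width, full_height)
        self.roi = (0, 0, full_width, full_height)

    def set_roi(self, x, y, width, height):
        # Reading out only a sub-array reduces the data sent over the network,
        # which raises the achievable frame (monitoring) rate.
        self.roi = (x, y, width, height)

    def estimated_speedup(self):
        full_px = self.full[0] * self.full[1]
        roi_px = self.roi[2] * self.roi[3]
        return full_px / roi_px

cam = CameraHandle("192.168.1.11")          # fixed IP address, as in the description above
cam.set_roi(1800, 900, 1400, 1400)          # zoom in on the anatomical region of interest
print(f"~{cam.estimated_speedup():.1f}x less pixel data per frame")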
Treatment scenes are used at later stages in the workflow and use views that are zoomed in. Selecting a scene is illustrated in the right panel of FIG. 8. The result of selecting the scene named “treat 7” is illustrated in FIG. 9. Selecting a scene automatically initiates a tracking routine. When in tracking mode, the OGTS software displays the sum of the red component of the RGB reference image and the green and blue components of the live RGB image in each of the camera views. When the reference image and the live image are aligned, the red and green colors in the images disappear because all portions of the RGB images are in alignment to provide a correctly colored RGB image (at least in the region where the images align). As can be seen in the left and top panels of the GUI shown in FIG. 9, the patient head was slightly rotated to the left about the vertical axis, resulting in a significant mismatch towards the front of his face. The lateral alignment remained reasonable. The OGTS is used to calculate the misalignment between the reference images and the images in the live scenario. The cameras are calibrated to provide a defined pixel size per mm relationship at the isocenter plane in actual space. For example, as discussed above, cameras used during the development of embodiments of the technology described herein have a ratio of real world distance to camera sensor pixels and/or image pixels of approximately 0.2723 mm/pixel in the horizontal direction and approximately 0.2729 mm/pixel in the vertical direction (e.g., approximately 0.3 mm/pixel), which is equivalent to 3.672 pixels/mm in the horizontal direction and 3.6620 pixels/mm in the vertical direction (e.g., approximately 3.7 pixels/mm). Thus, a live image that is misaligned by, e.g., 100 pixels relative to a reference image indicates that the patient should be translated in space by approximately 30 mm in the appropriate plane imaged by the camera to reproduce the set-up position recorded in the reference image. Thus, the ∆X, ∆Y, and/or ∆Z displacement appropriate to bring the patient to the position in real space recorded by the reference images can be obtained by aligning the live and reference images in each camera view, determining the distance in image pixels needed to align the live and reference images, and calculating the ∆X, ∆Y, and/or ∆Z displacement according to the pixel size per mm relationship. Furthermore, the OGTS software also allows for rotating images to determine ∆ψ, ∆φ, and ∆θ to correct for rotations of the patient about the X, Y, and Z axes relative to the reference images previously captured.

All publications and patents mentioned in the above specification are herein incorporated by reference in their entirety for all purposes. Various modifications and variations of the described compositions, methods, and uses of the technology will be apparent to those skilled in the art without departing from the scope and spirit of the technology as described. Although the technology has been described in connection with specific exemplary embodiments, it should be understood that the invention as claimed should not be unduly limited to such specific embodiments. Indeed, various modifications of the described modes for carrying out the invention that are obvious to those skilled in the art are intended to be within the scope of the following claims.

Claims

CLAIMS

WE CLAIM:

1. An optical guidance and tracking system (OGTS) comprising: an overhead camera; and a first peripheral camera, wherein a field of view of the overhead camera is orthogonal to a field of view of the first peripheral camera.

2. The OGTS of claim 1, further comprising a second peripheral camera, wherein a field of view of the second peripheral camera is orthogonal to the field of view of the overhead camera; and the field of view of the second peripheral camera is orthogonal to the field of view of the first peripheral camera.

3. The OGTS of claim 2, further comprising a third peripheral camera, wherein the fields of view of any two of the peripheral cameras and the overhead camera are all mutually orthogonal.

4. The OGTS of claim 3, further comprising a fourth peripheral camera, wherein the fields of view of any two of the peripheral cameras and the overhead camera are all mutually orthogonal.

5. The OGTS of claim 1, further comprising a patient support.

6. The OGTS of claim 5, wherein the patient support rotates around a vertical (Z) axis.

7. The OGTS of claim 6, wherein the field of view of the overhead camera is aligned with the vertical (Z) axis.

8. The OGTS of claim 1, further comprising a radiation therapy apparatus.

9. The OGTS of claim 8, wherein the radiation therapy apparatus comprises a static source.

10. The OGTS of claim 1, further comprising a computerized tomography (CT) scanner.

11. The OGTS of claim 10, wherein the overhead camera provides a view through a bore of a scanner ring of the CT scanner.

12. The OGTS of claim 1, wherein the overhead camera comprises a color sensor array and the first peripheral camera comprises a color sensor array.

13. The OGTS of claim 1, further comprising a processor and a non-transitory computer-readable medium.

14. The OGTS of claim 13, wherein the non-transitory computer-readable medium comprises a program and the processor executes the program to acquire color images from the overhead camera and to acquire images from the peripheral camera.

15. The OGTS of claim 13, further comprising a display.

16. The OGTS of claim 15, wherein the non-transitory computer-readable medium comprises a program and the processor executes the program to superimpose a live video over a reference image on the display.

17. The OGTS of claim 16, wherein the non-transitory computer-readable medium comprises a program and the processor executes the program to provide a graphical user interface on the display.

18. The OGTS of claim 17, wherein a user interacts with the graphical user interface to identify a region of interest of a camera view.

19. The OGTS of claim 17, wherein a user interacts with the graphical user interface to align the live video and the reference image on the display.

20. The OGTS of claim 19, wherein the processor calculates an adjustment in real space to position a patient properly for a treatment.

21. The OGTS of claim 13, further comprising a database comprising a saved scene.

22. The OGTS of claim 21, wherein the saved scene comprises an image, information identifying a camera that provided the image, and a region of interest for the image.

23. The OGTS of claim 1, wherein the first peripheral camera is located on a major Y axis of the OGTS.

24. The OGTS of claim 2, wherein the first peripheral camera is located on a major Y axis of the OGTS and the second peripheral camera is located on a major X axis of the OGTS.
25. The OGTS of claim 1, wherein the overhead camera is located on a major Z axis of the OGTS.

26. A method for positioning a patient, the method comprising obtaining a first reference image of a patient support and/or a patient; superimposing a first live image of a patient support and/or a patient over the reference image; aligning the first live image and the first reference image to determine a displacement; and moving the patient support and/or the patient according to the displacement.

27. The method of claim 26, wherein the first reference image was provided by a first camera and the first live image is provided by the first camera.

28. The method of claim 26, further comprising obtaining a second reference image of the patient support and/or the patient; and superimposing a second live image of the patient support and/or the patient over the second reference image.

29. The method of claim 28, wherein the second reference image was provided by a second camera and the second live image is provided by the second camera; and wherein a field of view of the second camera is orthogonal to a field of view of the first camera.

30. The method of claim 26, wherein the first camera is an overhead camera.

31. The method of claim 26, wherein the first camera is a peripheral camera.

32. The method of claim 26, wherein aligning the first live image and the first reference image comprises a user interacting with a graphical user interface to align the first live image and the first reference image.

33. The method of claim 26, wherein aligning the first live image and the first reference image comprises using an image alignment software to align the first live image and the first reference image.

34. The method of claim 26, wherein a saved scene comprises the first reference image.

35. The method of claim 34, wherein the saved scene comprises the first reference image, information identifying a camera that provided the first reference image, and a region of interest for the first reference image.

36. The method of claim 26, wherein the displacement comprises a translation in the X, Y, and/or Z directions and/or a rotation around the X, Y, and/or Z axes.

37. The method of claim 26, further comprising determining a relationship between the pixel size of the first camera and a distance in real space.

38. The method of claim 26, further comprising contacting the patient with radiation.

39. The method of claim 26, further comprising imaging the patient using computerized tomography.

40. A method for positioning a patient, the method comprising obtaining a first reference image of a patient support and/or a patient; superimposing a first live image of a patient support and/or a patient over the reference image; displacing the reference image according to a correction vector; applying the correction vector to the patient support and/or the patient; and verifying correct application of the correction vector using alignment of the first live image and the first reference image.

41. The method of claim 40, wherein application of the correction vector is correct when the first live image and the first reference image are substantially, maximally, or essentially aligned.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363462563P 2023-04-28 2023-04-28
US63/462,563 2023-04-28

Publications (1)

Publication Number Publication Date
WO2024226818A1 true WO2024226818A1 (en) 2024-10-31

Family

ID=93257306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/026297 WO2024226818A1 (en) 2023-04-28 2024-04-25 Optical guidance and tracking for medical imaging

Country Status (1)

Country Link
WO (1) WO2024226818A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160250501A1 (en) * 2008-05-22 2016-09-01 Vladimir Balakin Charged particle treatment, rapid patient positioning apparatus and method of use thereof
US20200121267A1 (en) * 2018-10-18 2020-04-23 medPhoton GmbH Mobile imaging ring system
US20200268327A1 (en) * 2017-09-21 2020-08-27 Asto CT, Inc. Patient positioning apparatus
US20210029307A1 (en) * 2017-08-16 2021-01-28 Covidien Lp Synthesizing spatially-aware transitions between multiple camera viewpoints during minimally invasive surgery
US20210192759A1 (en) * 2018-01-29 2021-06-24 Philipp K. Lang Augmented Reality Guidance for Orthopedic and Other Surgical Procedures
US20230054394A1 (en) * 2018-09-21 2023-02-23 Immersivetouch, Inc. Device and system for multidimensional data visualization and interaction in an augmented reality virtual reality or mixed reality image guided surgery


