
    • SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering
    • Abstract
    • Updates and To-do list
    • Overview
    • Installation
    • Quick Start
    • Installing and using the real-time viewer
    • Tips for using SuGaR on your own data and obtaining better reconstructions
    • Rendering, composition and animation
    • Evaluation

    Antoine Guédon Vincent Lepetit

    LIGM, Ecole des Ponts, Univ Gustave Eiffel, CNRS

    | Webpage | arXiv | Presentation video | Viewer video |

    Our method extracts meshes from 3D Gaussian Splatting reconstructions and builds hybrid representations

    We propose a method to allow precise and extremely fast mesh extraction from 3D Gaussian Splatting (SIGGRAPH 2023). Gaussian Splatting has recently become very popular as it yields realistic rendering while being significantly faster to train than NeRFs. It is however challenging to extract a mesh from the millions of tiny 3D Gaussians, as these Gaussians tend to be unorganized after optimization, and no method had been proposed so far. Our first key contribution is a regularization term that encourages the 3D Gaussians to align well with the surface of the scene. We then introduce a method that exploits this alignment to sample points on the real surface of the scene and extract a mesh from the Gaussians using Poisson reconstruction, which is fast, scalable, and preserves details, in contrast to the Marching Cubes algorithm usually applied to extract meshes from Neural SDFs. Finally, we introduce an optional refinement strategy that binds Gaussians to the surface of the mesh, and jointly optimizes these Gaussians and the mesh through Gaussian Splatting rendering. This enables easy editing, sculpting, rigging, animating, or relighting of the Gaussians using traditional software (Blender, Unity, Unreal Engine, etc.) by manipulating the mesh instead of the Gaussians themselves. Retrieving such an editable mesh for realistic rendering is done within minutes with our method, compared to hours with the state-of-the-art method on neural SDFs, while providing better rendering quality in terms of PSNR, SSIM and LPIPS.

    Hybrid representation (Mesh + Gaussians on the surface)

    Updates

    • [01/09/2024] Added a dedicated, real-time viewer to let users visualize and navigate in the reconstructed scenes (hybrid representation, textured mesh and wireframe mesh).
    • [12/20/2023] Added a short notebook showing how to render images with the hybrid representation using the Gaussian Splatting rasterizer.
    • [12/18/2023] Code release.

    To-do list

    • Viewer: Add an option to load the postprocessed mesh.
    • Mesh extraction: Add the possibility to edit the extent of the background bounding box.
    • Tips&Tricks: Add to the README.md file (and the webpage) some tips and tricks for using SuGaR on your own data and obtaining better reconstructions (see the tips provided by user kitmallet).
    • Improvement: Add an if block to sugar_extractors/coarse_mesh.py to skip foreground mesh reconstruction and avoid triggering an error if no surface point is detected inside the foreground bounding box. This can be useful for users who want to reconstruct "background scenes".
    • Using precomputed masks with SuGaR: Add a mask functionality to the SuGaR optimization, to allow the user to mask out some pixels in the training images (like white backgrounds in synthetic datasets).
    • Using SuGaR with Windows: Adapt the code to make it compatible with Windows. Due to path-writing conventions, the current code is not compatible with Windows.
    • Synthetic datasets: Add the possibility to use the NeRF synthetic dataset (which has a different format than COLMAP scenes).
    • Composition and animation: Finish cleaning the code for composition and animation, and add it to the sugar_scene/sugar_compositor.py script.
    • Composition and animation: Write a tutorial on how to use the scripts in the blender directory and the sugar_scene/sugar_compositor.py class to import composition and animation data into PyTorch and apply it to the SuGaR hybrid representation.

    As we explain in the paper, SuGaR optimization starts with first optimizing a 3D Gaussian Splatting model for 7k iterations with no additional regularization term. In this sense, SuGaR is a method that can be applied on top of any 3D Gaussian Splatting model, and a Gaussian Splatting model optimized for 7k iterations must be provided to SuGaR.

    Consequently, the current implementation contains a version of the original 3D Gaussian Splatting code, and we built our model as a wrapper around a vanilla 3D Gaussian Splatting model. Please note that, even though this wrapper implementation is convenient for many reasons, it may not be optimal for memory usage, so we might change it in the future.

    After optimizing a vanilla Gaussian Splatting model, the SuGaR pipeline consists of 3 main steps, and an optional one:

    1. SuGaR optimization: optimizing the alignment of the Gaussians with the surface of the scene

    2. Mesh extraction: extracting a mesh from the optimized Gaussians

    3. SuGaR refinement: refining the Gaussians and the mesh together to build a hybrid representation

    0. Requirements

    The software requirements are the following:

    • Conda (recommended for easy setup)
    • C++ compiler for PyTorch extensions
    • CUDA toolkit 11.8 for PyTorch extensions
    • The C++ compiler and the CUDA SDK must be compatible

    Please refer to the original 3D Gaussian Splatting repository for more details about requirements.

    1. Clone the repository

    Start by cloning this repository, either over HTTPS or over SSH:
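The clone commands themselves did not survive extraction. A minimal sketch, assuming the repository is hosted at github.com/Anttwo/SuGaR and that `--recursive` is needed to pull in the bundled Gaussian Splatting code (both the URL and the flag are assumptions):

```shell
# Clone over HTTPS (repository URL is an assumption)
git clone https://github.com/Anttwo/SuGaR.git --recursive

# ...or over SSH
git clone git@github.com:Anttwo/SuGaR.git --recursive
```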

    2. Install the required Python packages

    To install the required Python packages and activate the environment, go inside the SuGaR/ directory and run the following commands:
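The setup commands are missing from this copy. A sketch, assuming the repository ships a Conda environment file (the file name environment.yml and the environment name sugar are assumptions):

```shell
cd SuGaR/
# Create the Conda environment from the shipped spec, then activate it
conda env create -f environment.yml
conda activate sugar
```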

    Start by optimizing a vanilla Gaussian Splatting model for 7k iterations by running the script gaussian_splatting/train.py, as shown below. Please refer to the original 3D Gaussian Splatting repository for more details. This optimization should be very fast, and last only a few minutes.
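The command itself was stripped here. A hedged sketch following the CLI of the original 3D Gaussian Splatting repository (the -s, -m, and --iterations flags come from that repository; the paths are placeholders):

```shell
# Optimize a vanilla 3D Gaussian Splatting model for 7k iterations
python gaussian_splatting/train.py \
    -s <path to your COLMAP dataset> \
    -m <output path for the checkpoint> \
    --iterations 7000
```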

    Then, run the script train.py in the root directory to optimize a SuGaR model.

    The most important arguments for the train.py script are the following:

    We provide more details about the two regularization methods, "density" and "sdf", in the next section. For reconstructing detailed objects centered in the scene with 360° coverage, "density" provides a better foreground mesh. For a stronger regularization and a better balance between foreground and background, choose "sdf".
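The argument list itself did not survive extraction, so as an illustration only, a typical invocation might look like the following sketch (the -s, -c, and -r flag names are assumptions consistent with the descriptions above):

```shell
python train.py \
    -s <path to your COLMAP dataset> \
    -c <path to the vanilla Gaussian Splatting checkpoint> \
    -r density   # or "sdf" for a stronger regularization
```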

    The default configuration is high_poly with refinement_time set to "long". Results are saved in the output/ directory.

    As we explain in the paper, this script extracts a mesh in 30 to 35 minutes on average on a single GPU. After mesh extraction, the refinement only takes a few minutes when using --refinement_time "short", but can take up to an hour when using --refinement_time "long". A short refinement time is enough to produce a good-looking hybrid representation in most cases.

    1. Installation

    The viewer is currently built for Linux and macOS; it is not compatible with Windows. For Windows users, we recommend using WSL2 (Windows Subsystem for Linux), as it is very easy to install and use. Please refer to the official documentation for more details. We thank Mark Kellogg for his awesome 3D Gaussian Splatting implementation for Three.js, which we used to build this viewer.

    Please start by installing the latest versions of Node.js (such as 21.x) and npm; a simple way to do this is to run the following commands (using aptitude). Then, go inside the ./sugar_viewer/ directory and install the viewer's dependencies:
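The installation commands are missing from this copy. A sketch, assuming a Debian/Ubuntu system and the NodeSource setup script (the script URL and the 21.x version are assumptions):

```shell
# Install Node.js 21.x and npm (NodeSource setup script is an assumption)
curl -fsSL https://deb.nodesource.com/setup_21.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install the viewer's dependencies
cd ./sugar_viewer/
npm install
```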

    2. Usage

    First, make sure you have exported a .ply file and an .obj file using the train.py script. The .ply file contains the refined 3D Gaussians, and the .obj file contains the textured mesh. These files are exported by default when running the train.py script, so if you ran the code with the default values for --export_ply and --export_uv_textured_mesh, you should be good to go. The .ply file should be located in ./output/refined_ply/ /.

    Then, just run the following command in the root directory to start the viewer. Please make sure your .ply file is located in the right folder, and use a relative path starting with ./output/refined_ply. This command will redirect you to a local URL; click on the link to open the viewer in your browser. Click the icons in the top right to switch between the different representations (hybrid representation, textured mesh, wireframe mesh). Use the mouse to rotate the scene, and the mouse wheel to zoom in and out.
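The launch command was stripped from this copy. A sketch, assuming the viewer is started through a run_viewer.py script at the repository root (the script name, the -p flag, and the path pattern are assumptions):

```shell
python run_viewer.py -p ./output/refined_ply/<scene name>/<checkpoint name>.ply
```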

    1. Capture images or videos that cover the entire surface of the scene

    Using a smartphone or a camera, capture images or a video covering the entire surface of the 3D scene you want to reconstruct. The easiest way to do this is to move around the scene while recording a video. Try to move slowly and smoothly to avoid motion blur. For consistent reconstruction and easier camera pose estimation with COLMAP, maintaining a uniform focal length and a constant exposure time is also important; we recommend disabling auto-focus on your smartphone to ensure that the focal length remains constant. For better reconstructions, try to cover objects from several different angles, especially thin and detailed parts of the scene. Indeed, SuGaR is able to reconstruct very thin and detailed objects, but artifacts may appear if these thin objects are not covered well enough and are visible from only one side in the training images.

    Detailed explanation

    SuGaR applies Poisson reconstruction with 3D points sampled on the parts of the surface that are visible in the training images. This visibility constraint is important to prevent sampling points on the backside of the Gaussian level sets, located behind the surface of the scene, which would produce many self-intersections and unnecessary vertices in the mesh after applying Poisson reconstruction. However, this visibility constraint also means that SuGaR cannot reconstruct parts of the surface that are not visible in the training images. If thin objects are visible from only one side in the training images, Poisson reconstruction will try to reconstruct a closed surface and will extend the surface of the thin objects, producing an inaccurate mesh. TODO: Add images illustrating such artifacts.

    However, such artifacts are not visible in the hybrid representation, because the Gaussian texturing assigns low opacity to them during refinement. We already have simple ideas that could help avoid such artifacts, such as (a) identifying new camera poses that cover parts of the surface not visible in the training images but likely to lie on the same level set as the visible parts, and (b) adding these camera poses to the set of cameras used for sampling the points when applying Poisson reconstruction. We will update the code in the near future to include this.

    To convert a video to images, you can install ffmpeg and run the following command, where the FPS value is the desired sampling rate of the video images. An FPS value of 1 corresponds to sampling one image per second. We recommend adjusting the sampling rate to the length of the video, so that the number of sampled images is between 100 and 300.
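The ffmpeg command itself is missing here. A sketch, assuming the frames should land in the input/ directory expected by the COLMAP step below (the output directory and filename pattern are assumptions):

```shell
# Sample one frame per second (fps=1) from the video into numbered JPEGs
ffmpeg -i <path to your video> -qscale:v 1 -vf fps=1 input/%04d.jpg
```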

    2. Estimate camera poses with COLMAP

    Please first install a recent version of COLMAP (ideally CUDA-powered) and make sure to put the images you want to use in a directory /input. Then, run the script gaussian_splatting/convert.py from the original Gaussian Splatting implementation to compute the camera poses from the images using COLMAP. Please refer to the original 3D Gaussian Splatting repository for more details.

    Sometimes COLMAP fails to reconstruct all images into the same model and hence produces multiple sub-models. The smaller sub-models generally contain only a few images. However, by default, the script convert.py applies image undistortion only on the first sub-model, which may contain only a few images. If this is the case, a simple solution is to keep only the largest sub-model and discard the others. To do this, open the source directory containing your input images, then open the sub-directory /distorted/sparse/. You should see several sub-directories named 0/, 1/, etc., each containing a sub-model. Remove all sub-directories except the one containing the largest files, and rename it to 0/. Then, run the script convert.py one more time, but skip the matching process.

    Note: If the sub-models have common registered images, they could be merged into a single model as a post-processing step using COLMAP; however, merging sub-models requires running another global bundle adjustment after the merge, which can be time-consuming.
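The re-run command was stripped here. A sketch based on the convert.py script from the original Gaussian Splatting repository (the --skip_matching flag follows that script; the path is a placeholder):

```shell
python gaussian_splatting/convert.py -s <path to your source directory> --skip_matching
```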

    The view_sugar_results.ipynb notebook and the metrics.py script provide examples of how to load a refined SuGaR model for rendering a scene with the hybrid representation and the Gaussian Splatting rasterizer. We will add more details about this in the near future.

    We also provide, in the blender directory, several Python scripts to export composition and animation data of SuGaR meshes modified or animated within Blender. Additionally, the sugar_scene/sugar_compositor.py script provides a Python class that can be used to import such animation or composition data into PyTorch and apply it to the SuGaR hybrid representation.

    The hybrid representation allows for high-quality rendering of the scene with the Gaussian Splatting rasterizer, as shown below.

    The usage of these scripts and this class may be a bit tricky, so we will add a detailed tutorial on how to use them in the near future.

    To evaluate the quality of the reconstructions, we provide a script metrics.py that computes the PSNR, SSIM and LPIPS metrics on test images. Start by optimizing SuGaR models for the desired scenes and a regularization method ("density" or "sdf"), then create a .json config file containing the paths to the scenes in the following format: {source_images_dir_path: vanilla_gaussian_splatting_checkpoint_path}.

    Finally, run the script as follows:
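The invocation itself is missing from this copy. A sketch (the --scene_config and -r flag names are assumptions consistent with the description above):

```shell
python metrics.py --scene_config <path to your .json config file> -r density   # or "sdf"
```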
