How to Use Seedance 2.0: Complete Step-by-Step Tutorial

Feb 10, 2026

Seedance 2.0 is one of the most capable AI video generation models available today, offering text-to-video, image-to-video, and multimodal generation with impressive quality. Whether you want to create cinematic short films, product demos, or social media content, this tutorial walks you through every step from your first login to exporting a finished video. By the end, you will know how to write effective prompts, use reference files for consistency, and take advantage of advanced features like multi-shot storytelling and audio-synced lip movement.

Step 1: Access the Platform

Getting started with Seedance 2.0 is straightforward. The model is available through official web platforms, and which one you use depends on your region; this tutorial uses Dreamina.

Head to Dreamina and create a free account using your email or a social login. The signup process takes less than a minute.

Free daily credits are provided to every account, so you can start generating videos immediately without paying anything. This makes it easy to experiment and learn the interface before committing to a paid plan. For details on credit allocations and subscription tiers, check our pricing overview.

Once logged in, you will land on the main dashboard. Look for the video generation section, which is where all the magic happens.

Step 2: Understanding the Interface

The Seedance 2.0 workspace is built around a simple input-output design. On the left side, you will find the prompt input area where you describe what you want to generate. On the right side, your generated videos appear in a gallery view.

The platform supports three primary input modes:

  • Text-to-Video: Type a text description and the model generates a video from scratch. This is the most common starting point.
  • Image-to-Video: Upload a starting image and the model animates it into a video. Great for bringing still photos or illustrations to life.
  • Multimodal Input: Combine text, images, video clips, and audio files together using the @ reference system. This is the most powerful mode and what sets Seedance 2.0 apart from many competitors.

You will also find controls for aspect ratio (16:9, 9:16, 1:1, and more), video duration (5s or 10s clips), and resolution settings. Familiarize yourself with these before generating your first video.
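
If you find yourself reusing the same settings across projects, it can help to keep them as a small preset. Below is a minimal sketch in Python; the field names are our own shorthand for the UI controls described above, not official platform parameters:

# A reusable preset mirroring the UI controls described above.
# Field names are illustrative shorthand, not official platform parameters.
from dataclasses import dataclass

@dataclass
class GenerationSettings:
    aspect_ratio: str = "16:9"   # also 9:16, 1:1, and more
    duration_s: int = 5          # 5s or 10s clips
    resolution: str = "1080p"    # or "2K" for higher detail

vertical_preset = GenerationSettings(aspect_ratio="9:16", duration_s=10)
print(vertical_preset)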

Step 3: Writing Your First Prompt

The quality of your output depends heavily on how you write your prompt. Seedance 2.0 responds well to detailed, specific descriptions that cover multiple dimensions of the scene.

What to Include in Your Prompt

A strong prompt should address these elements:

  • Subject and action: What is happening in the scene? Be specific about movements and behaviors.
  • Camera work: Describe the camera angle and movement. Use terms like tracking shot, dolly zoom, aerial view, close-up, or handheld.
  • Lighting and atmosphere: Mention lighting conditions such as golden hour, neon-lit, overcast, or studio lighting.
  • Mood and style: Indicate the overall feel, whether cinematic, documentary, dreamy, or dramatic.
  • Technical details: Include resolution preferences and visual quality cues.

Example Prompt

Here is a well-structured prompt you can try immediately:

A woman walks through a neon-lit Tokyo alley at night, camera follows from behind in a steady tracking shot, rain reflecting neon signs on wet pavement, cinematic 2K

This prompt works because it specifies the subject (a woman walking), the setting (neon-lit Tokyo alley at night), camera movement (steady tracking shot from behind), atmospheric detail (rain, neon reflections on wet pavement), and style (cinematic 2K).
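
If you plan to generate many clips, it can help to assemble prompts from the same checklist every time. Here is a small convenience sketch in Python; it is not a platform feature, and the model only ever sees the final string:

# Compose a prompt from the checklist in Step 3.
# A local convenience only; the platform receives the joined string.
def build_prompt(subject, camera, lighting, mood_style):
    parts = [subject, camera, lighting, mood_style]
    return ", ".join(p.strip() for p in parts if p)

prompt = build_prompt(
    subject="A woman walks through a neon-lit Tokyo alley at night",
    camera="camera follows from behind in a steady tracking shot",
    lighting="rain reflecting neon signs on wet pavement",
    mood_style="cinematic 2K",
)
print(prompt)  # reproduces the example prompt above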

Common Mistakes to Avoid

  • Too vague: "A cool video of a city" gives the model little to work with.
  • Contradictory instructions: "Bright sunny day with dark moody shadows" confuses the generation.
  • Overloaded prompts: Trying to describe 10 different things happening simultaneously often produces chaotic results.

For a comprehensive library of tested prompts across different genres and styles, browse our prompt guide.

Step 4: Using the @ Reference System

One of the most powerful features of Seedance 2.0 is its @ reference system, which lets you upload and reference external files directly in your prompt. This is what enables true multimodal generation.

How It Works

Upload files to the workspace, then reference them in your prompt using the @ symbol followed by the file type and number:

  • @image1, @image2 for uploaded images
  • @video1, @video2 for uploaded video clips
  • @audio1 for uploaded audio files

You can reference up to 12 files in a single generation, giving you extensive control over the output.
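
The handles are numbered per file type, which suggests they follow upload order within each type (an inference from the naming scheme above, not documented behavior). The sketch below is a small local helper that derives handles for a set of uploads and enforces the 12-file limit:

# Derive @ reference handles for a set of uploads and enforce the 12-file limit.
# Assumes handles are numbered per file type in upload order (an inference,
# not documented behavior). Purely a local bookkeeping aid.
MAX_REFERENCES = 12

def assign_handles(uploads):
    """uploads: list of (filename, kind) tuples, where kind is 'image', 'video', or 'audio'."""
    if len(uploads) > MAX_REFERENCES:
        raise ValueError(f"Too many reference files: {len(uploads)} > {MAX_REFERENCES}")
    counters = {"image": 0, "video": 0, "audio": 0}
    handles = {}
    for filename, kind in uploads:
        counters[kind] += 1
        handles[filename] = f"@{kind}{counters[kind]}"
    return handles

print(assign_handles([("hero.png", "image"), ("style.jpg", "image"), ("track.mp3", "audio")]))
# {'hero.png': '@image1', 'style.jpg': '@image2', 'track.mp3': '@audio1'}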

Practical Use Cases

Character consistency: Upload a photo of a character as @image1, then write: "The person in @image1 walks along a beach at sunset, medium shot, warm golden light." The model will maintain the character's appearance across generations.

Style transfer: Upload a reference image with a specific visual style as @image1, then describe a different scene: "A mountain landscape in the artistic style of @image1, sweeping aerial shot, morning mist." The model will apply the visual style from your reference.

Audio synchronization: Upload a music track or voiceover as @audio1 and include it in your prompt: "A musician performs on stage, synced to @audio1, dynamic camera cuts between close-up and wide shot." The model will align visual movements to the audio.

Video continuation: Upload a previously generated clip as @video1 and extend it: "Continue @video1, the camera pans right to reveal a hidden garden, soft natural lighting." This creates seamless extensions of existing footage.

Step 5: Advanced Features

Once you are comfortable with basic generation, Seedance 2.0 offers several advanced capabilities that unlock professional-quality results.

Multi-Shot Storytelling with Temporal Cues

You can create multi-shot sequences within a single generation by using temporal cues in your prompt. Structure your prompt with time markers:

[0-3s] Wide establishing shot of a mountain village at dawn, mist rising from valleys. [3-6s] Cut to a close-up of a woman opening a wooden door, warm interior light spilling out. [6-10s] Medium shot of her walking down a cobblestone path, camera follows beside her.

These temporal markers tell the model when to transition between shots, creating a narrative flow within a single video clip.
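
If you storyboard shots separately, you can also generate these time markers instead of typing them by hand. A minimal sketch, assuming you know each shot's length and the overall clip duration (5s or 10s, per Step 2):

# Build a multi-shot prompt with [start-end s] markers from a shot list.
# A local convenience; the platform only sees the final prompt string.
def multi_shot_prompt(shots, clip_duration=10):
    """shots: list of (duration_in_seconds, description) tuples."""
    if sum(d for d, _ in shots) != clip_duration:
        raise ValueError("Shot durations should add up to the clip duration.")
    parts, start = [], 0
    for duration, description in shots:
        end = start + duration
        parts.append(f"[{start}-{end}s] {description}")
        start = end
    return " ".join(parts)

print(multi_shot_prompt([
    (3, "Wide establishing shot of a mountain village at dawn, mist rising from valleys."),
    (3, "Cut to a close-up of a woman opening a wooden door, warm interior light spilling out."),
    (4, "Medium shot of her walking down a cobblestone path, camera follows beside her."),
]))  # reproduces the example above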

Video Extension and Continuation

If you have generated a clip and want to extend it, upload the clip as a reference and prompt the model to continue the scene. This is particularly useful for building longer sequences from shorter clips. The model understands the context of the uploaded video and generates visually coherent continuations.

Audio Generation and Lip-Sync

Seedance 2.0 can generate synchronized audio and handle lip-sync for speaking characters. When you include dialogue or voiceover instructions, the model aligns mouth movements with the spoken words. Upload an audio reference with @audio1 for precise synchronization, or let the model generate ambient audio that matches the scene.

Camera Control Keywords

For precise camera work, use these keywords in your prompts:

  • Tracking shot / Follow shot: Camera moves alongside the subject
  • Dolly zoom: The classic Hitchcock effect, where the camera moves while the lens zooms the opposite way, keeping the subject the same size as the background perspective shifts
  • Crane shot: Vertical camera movement, rising or descending
  • Handheld: Slight natural shake for documentary feel
  • Static shot: Camera remains fixed on a tripod
  • Pan left/right: Horizontal camera rotation
  • Tilt up/down: Vertical camera rotation
  • Zoom in/out: Focal length change

Combining these with specific timing creates sophisticated camera choreography.

Step 6: Exporting Your Video

After generating a video you are satisfied with, exporting is simple:

  1. Preview the generated video in the gallery view. Play it back to check quality and timing.
  2. Select resolution: Choose between 1080p for standard quality or 2K for higher detail. The 2K option uses more credits but delivers noticeably sharper output.
  3. Download: Click the download button to save the video as an MP4 file. The file is ready to use in any video editor or to upload directly to social media platforms.

Videos are saved to your account history, so you can always return and download them again later.
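
If you want to confirm a download before importing it into an editor, ffprobe (bundled with FFmpeg) can report the resolution and duration from the command line. This is an optional local check, not a platform feature; the filename below is a placeholder:

# Optional local check of a downloaded clip with ffprobe (part of FFmpeg).
# "seedance_clip.mp4" is a placeholder; use your downloaded filename.
import json
import subprocess

def probe(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height:format=duration",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    stream = info["streams"][0]
    return stream["width"], stream["height"], float(info["format"]["duration"])

print(probe("seedance_clip.mp4"))  # e.g. (1920, 1080, 10.0)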

Tips for Best Results

After extensive testing, here are the most effective practices for getting great output from Seedance 2.0:

  1. Be specific about motion: Instead of "a bird flies," write "a hawk glides in slow circles above a forest canopy, wings fully extended, camera tracks from below." Motion descriptions are where detail pays off the most.

  2. Describe the atmosphere, not just the scene: "Warm afternoon light filtering through dusty windows" gives the model much more to work with than "a room with windows." Lighting and mood cues dramatically improve quality.

  3. Use temporal cues for complex scenes: When you want multiple things to happen, sequence them with time markers like [0-3s] and [3-6s] rather than trying to describe everything at once.

  4. Reference images for character consistency: If your project involves the same character across multiple clips, always upload and reference the same character image. This is the most reliable way to maintain visual consistency.

  5. Iterate and refine: Your first generation may not be perfect. Adjust your prompt based on what the model produced, adding or removing details. Small changes in wording can produce significantly different results.

  6. Start simple, then add complexity: Begin with a straightforward scene and gradually add camera movements, lighting details, and temporal cues as you learn what the model responds to best.

For real-world examples of what others are creating, explore our use cases gallery.

What's Next

You now have the knowledge to start creating with Seedance 2.0. Here are some resources to help you go further:

  • Prompt Library: Browse hundreds of tested prompts organized by genre, style, and technique
  • Pricing & Credits: Understand credit costs and choose the right plan for your usage
  • Seedance vs Sora Comparison: See how Seedance 2.0 stacks up against other AI video models
  • FAQ: Get answers to common questions about generation limits, supported formats, and account management
  • API Access: Integrate Seedance 2.0 into your own applications and workflows

The best way to learn is to start generating. Open the platform, try the example prompt from Step 3, and experiment with the @ reference system. Every generation teaches you something new about what Seedance 2.0 can do.
