add PROMPT_GENERATION_MAX_TOKENS and CODE_GENERATION_MAX_TOKENS in docker environment (#10040)
@@ -558,6 +558,22 @@ ETL_TYPE=dify
 # For example: http://unstructured:8000/general/v0/general
 UNSTRUCTURED_API_URL=
 
+# ------------------------------
+# Model Configuration
+# ------------------------------
+
+# The maximum number of tokens allowed for prompt generation.
+# This setting controls the upper limit of tokens that can be used by the LLM
+# when generating a prompt in the prompt generation tool.
+# Default: 512 tokens.
+PROMPT_GENERATION_MAX_TOKENS=512
+
+# The maximum number of tokens allowed for code generation.
+# This setting controls the upper limit of tokens that can be used by the LLM
+# when generating code in the code generation tool.
+# Default: 1024 tokens.
+CODE_GENERATION_MAX_TOKENS=1024
+
 # ------------------------------
 # Multi-modal Configuration
 # ------------------------------
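For reference, a minimal sketch of how an application container could pick up these two variables from the environment, falling back to the documented defaults (512 for prompts, 1024 for code). This is an illustrative example, not Dify's actual configuration code; the helper function and its name are assumptions made for the sketch.

import os

# Read the generation limits from the environment, with the documented defaults.
PROMPT_GENERATION_MAX_TOKENS = int(os.environ.get("PROMPT_GENERATION_MAX_TOKENS", "512"))
CODE_GENERATION_MAX_TOKENS = int(os.environ.get("CODE_GENERATION_MAX_TOKENS", "1024"))

def max_tokens_for(task: str) -> int:
    # Hypothetical helper: pick the token ceiling for a generation task.
    return CODE_GENERATION_MAX_TOKENS if task == "code" else PROMPT_GENERATION_MAX_TOKENS

With the values above, max_tokens_for("code") returns 1024 and any other task falls back to the 512-token prompt limit; overriding either variable in docker-compose changes the ceiling without a code change.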