CLI Reference

This page documents the commands and supporting modules available in Codex CLI Studio.

codex_cli.main.explain(ctx: typer.models.Context, input_str: str, detail: str, lang: str)

Process the explain command.

codex_cli.main.script(ctx: typer.models.Context, task_description: str, output_type: str, dry_run: bool)

Process the script command.

codex_cli.main.visualize(ctx: typer.models.Context, file_path: pathlib.Path, output_file: pathlib.Path, output_format: str | None)

Process the visualize command.

codex_cli.main.config_explain(ctx: typer.models.Context, file_path: pathlib.Path)

Process the config explain subcommand.
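
The four commands above are thin Typer wrappers around the command modules documented below. A minimal sketch of driving them programmatically with Typer's test runner, assuming the Typer application object is exposed as codex_cli.main.app and that option names follow Typer's default parameter-to-flag mapping (both are assumptions, not confirmed by this page):

    from typer.testing import CliRunner

    from codex_cli.main import app  # assumed name of the Typer app object

    runner = CliRunner()

    # Explain a shell one-liner (assumes the default --detail/--lang option names).
    result = runner.invoke(app, ["explain", "grep -rn TODO .", "--detail", "basic", "--lang", "en"])
    print(result.exit_code, result.output)

    # Generate (and only display) a bash script from a natural language task.
    result = runner.invoke(app, ["script", "archive all *.log files older than 7 days"])
    print(result.output)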

Command Modules

codex_cli.explain.explain_code(input_str: str, detail: str = 'basic', lang: str = 'en')

Explains a code snippet, shell command, or the content of a file.
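
For example, a usage sketch calling it directly with the documented defaults:

    from codex_cli.explain import explain_code

    # Explain a shell one-liner using the default detail level and language.
    explain_code("tar -czf backup.tar.gz ./project")

    # The same call with the defaults spelled out ('basic' detail, English).
    explain_code("print(sum(x * x for x in range(10)))", detail="basic", lang="en")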

codex_cli.script.clean_generated_code(code: str, language: str) → str

Removes potential markdown code fences and leading/trailing whitespace.
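
A sketch of the expected behaviour, based only on the description above (the exact fence handling is an assumption):

    from codex_cli.script import clean_generated_code

    raw = "```bash\necho \"hello\"\n```\n"
    # The fences and trailing whitespace should be stripped, leaving: echo "hello"
    print(clean_generated_code(raw, "bash"))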

codex_cli.script.generate_script(task_description: str, output_type: str = 'bash', dry_run: bool = False)

Generates a script based on a natural language task description.

Parameters:
  • task_description – The description of the task for the script.

  • output_type – The desired script type (e.g., “bash”, “python”). Defaults to “bash”.

  • dry_run – If True, the generated script is only displayed (currently always True).
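
A usage sketch with the documented defaults (an OPENAI_API_KEY is presumably required, as for the other AI-backed helpers):

    from codex_cli.script import generate_script

    # Ask for a bash script; dry_run=True only displays the result.
    generate_script(
        "find files larger than 100 MB under /var/log and print their sizes",
        output_type="bash",
        dry_run=True,
    )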

class codex_cli.visualize.CallGraphVisitor

Visits AST nodes to build a function call graph within a module. Stores calls as a dictionary: {caller_function_name: {callee_function_name, …}}. A usage sketch follows the method entries below.

visit_FunctionDef(node: FunctionDef)

Visit a function definition node.

visit_Call(node: Call)

Visit a function call node.
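
A usage sketch for CallGraphVisitor with the standard ast module; the attribute holding the resulting dictionary is assumed here to be named calls, which this page does not confirm:

    import ast
    import textwrap

    from codex_cli.visualize import CallGraphVisitor

    source = textwrap.dedent("""
        def helper():
            return 42

        def main():
            print(helper())
    """)

    visitor = CallGraphVisitor()
    visitor.visit(ast.parse(source))

    # Assumed attribute name; per the description, calls are stored as
    # {caller_function_name: {callee_function_name, ...}}.
    print(visitor.calls)  # e.g. {'main': {'helper', 'print'}}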

codex_cli.visualize.generate_call_graph_dot(call_graph: dict[str, set[str]], graph_name: str = 'CallGraph') → Digraph

Generates a graphviz.Digraph object representing the call graph.

Parameters:
  • call_graph – Dictionary mapping caller function names to sets of callee names.

  • graph_name – Name attribute for the generated graph.

Returns:

A graphviz.Digraph object.
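
A small sketch of building a graph from a hand-written call dictionary; printing the DOT source works anywhere, while rendering an image additionally requires the Graphviz 'dot' binary:

    from codex_cli.visualize import generate_call_graph_dot

    call_graph = {
        "main": {"load_config", "run"},
        "run": {"process_item"},
    }

    dot = generate_call_graph_dot(call_graph, graph_name="Example")
    print(dot.source)                                   # the DOT text
    dot.render("example", format="png", cleanup=True)   # needs Graphviz installed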

codex_cli.visualize.is_tool_available(name: str) → bool

Check whether the command given by name is available in the system’s PATH.
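
A minimal usage sketch, e.g. to check for the Graphviz binary before rendering:

    from codex_cli.visualize import is_tool_available

    if not is_tool_available("dot"):
        print("Graphviz 'dot' was not found on PATH.")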

codex_cli.visualize.generate_visualization(file_path: str, output_dot_or_image_file: str | None = None, output_format: str | None = None)

Parses a Python file, builds a call graph, and saves it as a DOT file or renders it to an image format using the Graphviz ‘dot’ command.

Parameters:
  • file_path – Path to the Python file (.py) to analyze.

  • output_dot_or_image_file – Path to save the output (DOT or image). If None, the name is derived from the input file.

  • output_format – The desired output format (e.g., png, svg, pdf, dot, gv). Format is inferred from output_file extension if not specified, defaulting to ‘gv’ (DOT).
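
A usage sketch; rendering to an image format requires the Graphviz 'dot' command on PATH (the file names below are illustrative):

    from codex_cli.visualize import generate_visualization

    # Save a DOT file; the output name is derived from the input file.
    generate_visualization("my_module.py")

    # Render straight to a PNG image instead.
    generate_visualization("my_module.py", "call_graph.png", output_format="png")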

codex_cli.config.explain_config(file_path: Path)

Reads a configuration file and asks an AI model to explain it.

Parameters:

file_path – Path object pointing to the configuration file.
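
A minimal usage sketch (an OPENAI_API_KEY is presumably required, as with the other AI-backed commands; the file name is illustrative):

    from pathlib import Path

    from codex_cli.config import explain_config

    explain_config(Path("docker-compose.yml"))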

Core Utilities

codex_cli.core.openai_utils.get_openai_client() → OpenAI | None

Initializes and returns the OpenAI client.

Reads the API key from the OPENAI_API_KEY environment variable.

Returns:

An initialized OpenAI client instance or None if initialization fails.
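
A sketch of the expected calling pattern, given that the key comes from the environment and failures return None:

    import os

    from codex_cli.core.openai_utils import get_openai_client

    # Normally exported in the shell; shown here only to highlight the dependency.
    os.environ.setdefault("OPENAI_API_KEY", "sk-...")

    client = get_openai_client()
    if client is None:
        raise SystemExit("OpenAI client could not be initialized; check OPENAI_API_KEY.")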

codex_cli.core.openai_utils.get_openai_response(prompt: str, model: str = 'gpt-4o') → str | None

Sends a prompt to the specified OpenAI model and returns the response.

Parameters:
  • prompt – The prompt string to send to the model.

  • model – The OpenAI model identifier (e.g., “gpt-4o”).

Returns:

The model’s response content as a string, or None if an error occurs.
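
A usage sketch showing the documented None-on-error behaviour:

    from codex_cli.core.openai_utils import get_openai_response

    answer = get_openai_response("Explain what `tar -czf backup.tar.gz .` does.", model="gpt-4o")
    if answer is None:
        print("The request to the OpenAI API failed.")
    else:
        print(answer)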