specless.minigrid.tspenv.TSPEnv

class specless.minigrid.tspenv.TSPEnv(num_locations: int = 5, width: int = 6, height: int = 6, agent_start_pos: Tuple[int, int] = (1, 1), agent_start_dir: int = 0, seed=None, **kwargs)[source]

Bases: MiniGridEnv

TSP environment with multiple floor locations, where several locations may share the same color.
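
A minimal construction sketch (assuming only the import path and constructor signature shown above):

    from specless.minigrid.tspenv import TSPEnv

    # Build a 6x6 TSP grid with 5 floor locations to visit.
    env = TSPEnv(
        num_locations=5,
        width=6,
        height=6,
        agent_start_pos=(1, 1),
        agent_start_dir=0,  # 0 = facing east (right) in MiniGrid conventions
    )
    obs, info = env.reset(seed=3)  # seed once, right after construction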

Methods

agent_sees

Check if a non-empty grid position is visible to the agent

close

After the user has finished using the environment, close contains the code necessary to "clean up" the environment.

gen_obs

Generate the agent's view (partially observable, low-resolution encoding)

gen_obs_grid

Generate the sub-grid observed by the agent.

get_frame

Returns an RGB image corresponding to the whole environment or the agent's point of view.

get_full_render

Render a full (non-partial) observation for visualization

get_pov_render

Render an agent's POV observation for visualization

get_random_locations

get_view_coords

Translate and rotate absolute grid coordinates (i, j) into the agent's partially observable view (sub-grid).

get_view_exts

Get the extents of the square set of tiles visible to the agent. Note: the bottom extent indices are not included in the set. If agent_view_size is None, self.agent_view_size is used.

get_wrapper_attr

Gets the attribute name from the environment.

hash

Compute a hash that uniquely identifies the current state of the environment.

in_view

Check if a grid position is visible to the agent

place_agent

Set the agent's starting point at an empty position in the grid

place_obj

Place an object at an empty position in the grid

put_obj

Put an object at a specific position in the grid

relative_coords

Check if a grid position belongs to the agent’s field of view, and return the corresponding coordinates

render

Compute the render frames as specified by render_mode during the initialization of the environment.

reset

Resets the environment to an initial internal state, returning an initial observation and info.

step

Run one timestep of the environment's dynamics using the agent actions.

Attributes

dir_vec

Get the direction vector for the agent, pointing in the direction of forward movement.

front_pos

Get the position of the cell that is right in front of the agent

metadata

np_random

Returns the environment’s internal _np_random generator; if it is not set, it will be initialised with a random seed.

render_mode

reward_range

right_vec

Get the vector pointing to the right of the agent.

spec

steps_remaining

unwrapped

Returns the base non-wrapped environment.

action_space

observation_space

class Actions(value)

Bases: IntEnum

An enumeration.
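
The concrete action names are inherited from MiniGrid. A quick sketch listing them (assuming the standard MiniGrid left/right/forward/pickup/drop/toggle/done members):

    from specless.minigrid.tspenv import TSPEnv

    # Enumerate the discrete actions exposed by the environment.
    for action in TSPEnv.Actions:
        print(action.value, action.name)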

agent_sees(x, y)

Check if a non-empty grid position is visible to the agent

close()

After the user has finished using the environment, close contains the code necessary to “clean up” the environment.

This is critical for closing rendering windows, database or HTTP connections. Calling close on an already closed environment has no effect and won’t raise an error.

property dir_vec

Get the direction vector for the agent, pointing in the direction of forward movement.

property front_pos

Get the position of the cell that is right in front of the agent

gen_obs()

Generate the agent’s view (partially observable, low-resolution encoding)

gen_obs_grid(agent_view_size=None)

Generate the sub-grid observed by the agent. This method also outputs a visibility mask telling us which grid cells the agent can actually see. If agent_view_size is None, self.agent_view_size is used.
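
A sketch of inspecting the observed sub-grid, continuing the construction sketch above (the grid and mask attributes follow the standard MiniGrid Grid API, an assumption here):

    # grid is a MiniGrid Grid; vis_mask is a boolean array marking which
    # cells of the sub-grid the agent can actually see.
    grid, vis_mask = env.gen_obs_grid()  # agent_view_size=None -> self.agent_view_size
    print(grid.width, grid.height, vis_mask.sum())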

get_frame(highlight: bool = True, tile_size: int = 32, agent_pov: bool = False)

Returns an RGB image corresponding to the whole environment or the agent’s point of view.

Parameters:
  • highlight (bool) – If true, the agent’s field of view or point of view is highlighted with a lighter gray color.

  • tile_size (int) – How many pixels will form a tile from the NxM grid.

  • agent_pov (bool) – If true, the rendered frame will only contain the point of view of the agent.

Returns:

A frame of type numpy.ndarray with shape (x, y, 3) representing RGB values for the x-by-y pixel image.

Return type:

frame (np.ndarray)
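
A sketch contrasting the two framings, continuing the construction sketch above:

    # Full-environment frame vs. the agent's point of view, both returned
    # as (x, y, 3) RGB numpy arrays.
    full = env.get_frame(highlight=True, tile_size=32)
    pov = env.get_frame(agent_pov=True)
    print(full.shape, pov.shape)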

get_full_render(highlight, tile_size)

Render a full (non-partial) observation for visualization

get_pov_render(tile_size)

Render an agent’s POV observation for visualization

get_view_coords(i, j)

Translate and rotate absolute grid coordinates (i, j) into the agent’s partially observable view (sub-grid). Note that the resulting coordinates may be negative or outside of the agent’s view size.

get_view_exts(agent_view_size=None)

Get the extents of the square set of tiles visible to the agent. Note: the bottom extent indices are not included in the set. If agent_view_size is None, self.agent_view_size is used.

get_wrapper_attr(name: str) → Any

Gets the attribute name from the environment.

hash(size=16)

Compute a hash that uniquely identifies the current state of the environment.

Parameters:
  • size – Size of the hash
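
Since the hash identifies the environment state, it can serve for duplicate-state detection (e.g. caching or graph search). A sketch, continuing the construction sketch above (the forward action name is assumed from MiniGrid conventions):

    # Hashes should differ once the underlying state changes.
    before = env.hash(size=16)
    env.step(TSPEnv.Actions.forward)
    after = env.hash(size=16)
    print(before != after)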

in_view(x, y)

Check if a grid position is visible to the agent

property np_random: Generator

Returns the environment’s internal _np_random generator; if it is not set, it will be initialised with a random seed.

Returns:

Instances of np.random.Generator
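
A sketch of drawing reproducible randomness from the environment's own generator, continuing the construction sketch above:

    # np_random is a numpy.random.Generator, seeded via reset(seed=...).
    rng = env.np_random
    pick = rng.integers(0, env.width)  # env.width assumed from MiniGridEnv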

place_agent(top=None, size=None, rand_dir=True, max_tries=inf)

Set the agent’s starting point at an empty position in the grid

place_obj(obj, top=None, size=None, reject_fn=None, max_tries=inf)

Place an object at an empty position in the grid

Parameters:
  • top – top-left position of the rectangle in which to place the object

  • size – size of that rectangle

  • reject_fn – function to filter out potential positions

put_obj(obj, i, j)

Put an object at a specific position in the grid
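
A sketch distinguishing the two placement methods, continuing the construction sketch above (Floor is assumed importable from minigrid.core.world_object, as in standard MiniGrid):

    from minigrid.core.world_object import Floor

    # place_obj samples a random empty cell inside the given rectangle and
    # returns the chosen position; put_obj writes to an explicit cell (i, j).
    pos = env.place_obj(Floor("red"), top=(1, 1), size=(3, 3))
    env.put_obj(Floor("blue"), 2, 3)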

relative_coords(x, y)

Check if a grid position belongs to the agent’s field of view, and return the corresponding coordinates

render()

Compute the render frames as specified by render_mode during the initialization of the environment.

The environment’s metadata render modes (env.metadata[“render_modes”]) should contain the possible ways to implement the render modes. In addition, list versions for most render modes are achieved through gymnasium.make, which automatically applies a wrapper to collect rendered frames.

Note

As the render_mode is known during __init__, the objects used to render the environment state should be initialised in __init__.

By convention, if the render_mode is:

  • None (default): no render is computed.

  • “human”: The environment is continuously rendered in the current display or terminal, usually for human consumption. This rendering should occur during step() and render() doesn’t need to be called. Returns None.

  • “rgb_array”: Return a single frame representing the current state of the environment. A frame is a np.ndarray with shape (x, y, 3) representing RGB values for an x-by-y pixel image.

  • “ansi”: Return a string (str) or StringIO.StringIO containing a terminal-style text representation for each time step. The text can include newlines and ANSI escape sequences (e.g. for colors).

  • “rgb_array_list” and “ansi_list”: List-based versions of the render modes are possible (except “human”) through the wrapper gymnasium.wrappers.RenderCollection, which is automatically applied during gymnasium.make(..., render_mode="rgb_array_list"). The collected frames are popped after render() or reset() is called.

Note

Make sure that your class’s metadata "render_modes" key includes the list of supported modes.

Changed in version 0.25.0: The render function was changed to no longer accept parameters; instead, these parameters should be specified when the environment is initialised, e.g., gymnasium.make("CartPole-v1", render_mode="human")
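
A sketch of the post-0.25 pattern: pick the render mode at construction time, then call render() with no arguments (assuming render_mode is forwarded to MiniGridEnv through **kwargs):

    from specless.minigrid.tspenv import TSPEnv

    env = TSPEnv(num_locations=5, render_mode="rgb_array")
    env.reset(seed=0)
    frame = env.render()  # np.ndarray of shape (x, y, 3)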

reset(*, seed=None, options=None)

Resets the environment to an initial internal state, returning an initial observation and info.

This method generates a new starting state, often with some randomness, to ensure that the agent explores the state space and learns a generalised policy about the environment. This randomness can be controlled with the seed parameter; otherwise, if the environment already has a random number generator and reset() is called with seed=None, the RNG is not reset.

Therefore, reset() should (in the typical use case) be called with a seed right after initialization and then never again.

For custom environments, the first line of reset() should be super().reset(seed=seed), which implements the seeding correctly.

Changed in version v0.25: The return_info parameter was removed and now info is expected to be returned.

Parameters:
  • seed (optional int) – The seed that is used to initialize the environment’s PRNG (np_random). If the environment does not already have a PRNG and seed=None (the default option) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset. If you pass an integer, the PRNG will be reset even if it already exists. Usually, you want to pass an integer right after the environment has been initialized and then never again. Please refer to the seeding sketch below to see this paradigm in action.

  • options (optional dict) – Additional information to specify how the environment is reset (optional, depending on the specific environment)

Returns:

  • observation (ObsType) – Observation of the initial state. This will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step().

  • info (dict) – This dictionary contains auxiliary information complementing observation. It should be analogous to the info returned by step().
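
A sketch of this seeding paradigm, continuing the construction sketch above:

    # Seed exactly once, right after construction ...
    obs, info = env.reset(seed=42)  # reproducible initial state
    # ... then reset without a seed: the existing RNG is reused, not re-seeded.
    obs, info = env.reset()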

property right_vec

Get the vector pointing to the right of the agent.

step(action)

Run one timestep of the environment’s dynamics using the agent actions.

When the end of an episode is reached (terminated or truncated), it is necessary to call reset() to reset this environment’s state for the next episode.

Changed in version 0.26: The Step API was changed removing done in favor of terminated and truncated to make it clearer to users when the environment had terminated or truncated, which is critical for reinforcement learning bootstrapping algorithms.

Parameters:

action (ActType) – an action provided by the agent to update the environment state.

Returns:

  • observation (ObsType) – An element of the environment’s observation_space as the next observation due to the agent actions. An example is a numpy array containing the positions and velocities of the pole in CartPole.

  • reward (SupportsFloat) – The reward as a result of taking the action.

  • terminated (bool) – Whether the agent reaches the terminal state (as defined under the MDP of the task), which can be a positive or negative outcome. An example is reaching the goal state or moving into the lava in the Sutton and Barto Gridworld. If true, the user needs to call reset().

  • truncated (bool) – Whether the truncation condition outside the scope of the MDP is satisfied. Typically, this is a time limit, but it could also be used to indicate that an agent has physically gone out of bounds. Can be used to end the episode prematurely before a terminal state is reached. If true, the user needs to call reset().

  • info (dict) – Contains auxiliary diagnostic information (helpful for debugging, learning, and logging). This might, for instance, contain metrics that describe the agent’s performance state, variables that are hidden from observations, or individual reward terms that are combined to produce the total reward. In OpenAI Gym < v26, it contains “TimeLimit.truncated” to distinguish truncation from termination; however, this is deprecated in favour of returning terminated and truncated variables.

  • done (bool) – (Deprecated) A boolean value indicating whether the episode has ended, in which case further step() calls will return undefined results. This was removed in OpenAI Gym v26 in favor of terminated and truncated attributes. A done signal may be emitted for different reasons: maybe the task underlying the environment was solved successfully, a certain time limit was exceeded, or the physics simulation has entered an invalid state.
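
A sketch of a complete episode loop under the terminated/truncated API, with a random policy for illustration:

    from specless.minigrid.tspenv import TSPEnv

    env = TSPEnv(num_locations=5)
    obs, info = env.reset(seed=0)
    terminated = truncated = False
    while not (terminated or truncated):
        action = env.action_space.sample()  # random action for illustration
        obs, reward, terminated, truncated, info = env.step(action)
    env.close()  # release any rendering resources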

property unwrapped: Env[ObsType, ActType]

Returns the base non-wrapped environment.

Returns:

The base non-wrapped gymnasium.Env instance

Return type:

Env