TensorFlow Keras implementation of an image captioning model with an encoder-decoder network. 🌃🌅🎑

This repo contains the model and the notebook for image captioning with visual attention.

Full credit goes to the TensorFlow team.

## Background Information

This notebook is a TensorFlow Keras implementation of image captioning with visual attention. Given an image, the goal is to generate a caption such as "a surfer riding on a wave". To accomplish this, it uses an attention-based model, which lets you see which parts of the image the model focuses on as it generates each word of the caption. The model architecture is similar to Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.
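The core of the visual-attention mechanism can be sketched as Bahdanau-style additive attention: the decoder scores each image region against its current hidden state, softmaxes the scores into attention weights, and takes a weighted sum of the region features as context. The snippet below is a simplified NumPy illustration of that idea, not the notebook's actual Keras layer; all shapes and weight matrices here are made-up stand-ins.

```python
import numpy as np

def additive_attention(features, hidden, W1, W2, v):
    """Bahdanau-style additive attention (illustrative NumPy sketch).

    features: (num_regions, feat_dim) image-region features
    hidden:   (hidden_dim,) decoder hidden state
    W1: (feat_dim, units), W2: (hidden_dim, units), v: (units,)
    Returns (context_vector, attention_weights).
    """
    # Score each image region: v . tanh(W1 @ f_i + W2 @ h)
    scores = np.tanh(features @ W1 + hidden @ W2) @ v   # (num_regions,)
    # Softmax over regions -> where the model "looks"
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context: attention-weighted sum of region features
    context = weights @ features                        # (feat_dim,)
    return context, weights

# Toy dimensions, roughly matching an 8x8 Inception V3 feature grid
rng = np.random.default_rng(0)
num_regions, feat_dim, hidden_dim, units = 64, 256, 512, 128
features = rng.standard_normal((num_regions, feat_dim))
hidden = rng.standard_normal(hidden_dim)
W1 = rng.standard_normal((feat_dim, units)) * 0.01
W2 = rng.standard_normal((hidden_dim, units)) * 0.01
v = rng.standard_normal(units)
context, weights = additive_attention(features, hidden, W1, W2, v)
```

Because the weights form a distribution over image regions, they can be reshaped back onto the image grid to visualize which regions drove each generated word.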

This notebook is an end-to-end example. When you run it, it downloads the MS-COCO dataset, preprocesses and caches a subset of the images using Inception V3, trains an encoder-decoder model, and uses the trained model to generate captions for new images.
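At inference time, caption generation is a token-by-token decoding loop: feed the last predicted token (plus decoder state) into the model, pick the most likely next word, and stop at an end token or a length limit. Below is a minimal greedy-decoding sketch; `step_fn` and the token ids are hypothetical stand-ins for the trained decoder and its tokenizer, not the notebook's actual API.

```python
def greedy_decode(step_fn, start_id, end_id, max_len=20):
    """Greedy caption decoding loop (illustrative sketch).

    step_fn(token_id, state) -> (logits, new_state)
    Returns the generated token ids, excluding start/end markers.
    """
    tokens, state = [start_id], None
    for _ in range(max_len):
        logits, state = step_fn(tokens[-1], state)
        # Greedy choice: take the highest-scoring next token
        next_id = max(range(len(logits)), key=logits.__getitem__)
        if next_id == end_id:
            break
        tokens.append(next_id)
    return tokens[1:]

# Toy stand-in decoder: deterministically emits 3, 4, then the end token 1
def toy_step(tok, state):
    order = {0: 3, 3: 4, 4: 1}
    logits = [0.0] * 5
    logits[order.get(tok, 1)] = 1.0
    return logits, state

print(greedy_decode(toy_step, start_id=0, end_id=1))  # -> [3, 4]
```

The notebook's attention-based decoder follows the same loop, except each step also recomputes attention over the cached Inception V3 features.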
