How to Retrieve Image From Firebase Database in Android

Our MyRestaurants application is essentially complete! All features we set out to include have been implemented: Users can look up restaurants in their zip code, create secure and personalized accounts, log in and out, and save restaurants to their own custom list in Firebase. We've even included a flexible user interface that can display our content in the best manner for the device's current orientation, and hide any irrelevant elements depending on where the user is viewing a restaurant's details.

Now, how about exploring extra features? It's becoming more and more common to take pictures of the delicious dishes you receive at restaurants. Let's support custom user photos in MyRestaurants, allowing users to take their own thumbnail photos for their saved restaurants' listings in the application.

Icon

First, let's make sure to include a button in our menu to indicate to users that a photo option is available. Download Google Material's camera-alt icon. Select the white PNG option, and place each included size in its corresponding sub-directory in drawable:

completed-icon-directory

Layout

Next, let's create a new menu in our menu resource directory. We'll call it menu_photo.xml and place the following inside:

menu_photo.xml

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">

    <item
        android:id="@+id/action_photo"
        android:icon="@drawable/ic_camera_alt_white_24dp"
        app:showAsAction="always"
        android:title="Photo">
    </item>
</menu>

We'll need to inflate this new menu in our RestaurantDetailFragment. Confirm that the following line allowing menu options is present. If not, add it now:

RestaurantDetailFragment.java

...
    @Override
    public void onCreate(Bundle savedInstanceState) {
        ...
        setHasOptionsMenu(true);
    }
...

Next, let's inflate our new menu, and include logic to handle user interactions with the menu options:

RestaurantDetailFragment.java

...
    @Override
    public void onCreateOptionsMenu(Menu menu, MenuInflater inflater) {
        super.onCreateOptionsMenu(menu, inflater);
        if (mSource.equals(Constants.SOURCE_SAVED)) {
            inflater.inflate(R.menu.menu_photo, menu);
        } else {
            inflater.inflate(R.menu.menu_main, menu);
        }
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        switch (item.getItemId()) {
            case R.id.action_photo:
                onLaunchCamera();
                break;
            default:
                break;
        }
        return false;
    }
...

Here, we include a conditional statement in onCreateOptionsMenu() that only inflates the photo menu if the user has navigated to RestaurantDetailFragment from the "Saved Restaurants" list. If they did not, only the main menu is inflated.

Then, in onOptionsItemSelected() we include a switch statement that will trigger a method called onLaunchCamera() when the user selects the photo icon from the menu. We'll write this method momentarily.

Launching the Camera

Next, let's define the method we will call when the user selects the camera icon from their menu:

RestaurantDetailFragment.java

...
    public void onLaunchCamera() {
        Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        if (takePictureIntent.resolveActivity(getActivity().getPackageManager()) != null) {
            startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
        }
    }
...
  • We set up our Intent, providing MediaStore.ACTION_IMAGE_CAPTURE as a parameter. This is an implicit intent that instructs Android to open the device's camera. MediaStore is a built-in Android class that handles all things media, and ACTION_IMAGE_CAPTURE is the standard intent action for accessing the device's camera application.

  • We include a conditional that checks whether takePictureIntent.resolveActivity(getActivity().getPackageManager()) is not null. resolveActivity() returns the first activity component capable of handling our intent. Essentially, it ensures a camera app is available and accessible. This check is important because if we launch the intent and there is no camera application present to handle it, our app will crash.

  • Next, startActivityForResult() launches our intent, indicating that we'd like a result returned from it. In our case, we launch the camera, and retrieve the resulting image. This method takes our new Intent, and the constant REQUEST_IMAGE_CAPTURE.

  • REQUEST_IMAGE_CAPTURE should be an integer value. As long as it is greater than or equal to 0, the result of the action we are launching will be returned automatically in a callback method onActivityResult(), which we will define momentarily. This value may also be used to tell results apart when multiple implicit intents are being triggered and multiple pieces of information are returned to the app, as illustrated in the sketch below. Because we are only handling one such intent, this constant may be any number greater than 0. For more information, check out the Android Documentation for this method.
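For example, here is a hypothetical sketch (the contact request and its REQUEST_PICK_CONTACT constant are not part of MyRestaurants; they're only for illustration) of how distinct request codes let a single onActivityResult() callback tell its results apart:

...
    private static final int REQUEST_IMAGE_CAPTURE = 111;
    private static final int REQUEST_PICK_CONTACT = 222;   // hypothetical second request

    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == REQUEST_IMAGE_CAPTURE) {
            // Handle the photo returned by the camera app
        } else if (requestCode == REQUEST_PICK_CONTACT) {
            // Handle the contact returned by the contacts app
        }
    }
...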

Let's make sure this constant is defined at the top of our class now:

RestaurantDetailFragment.java

public class RestaurantDetailFragment extends BaseFragment implements View.OnClickListener {
...
    private static final int REQUEST_IMAGE_CAPTURE = 111;
...

As we just discussed, startActivityForResult() will automatically trigger the callback method onActivityResult() when the result of our activity is available (in our case, a picture the user has taken). We'll override this method in order to snag our picture:

RestaurantDetailFragment.java

...
    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == getActivity().RESULT_OK) {
            Bundle extras = data.getExtras();
            Bitmap imageBitmap = (Bitmap) extras.get("data");
            mImageLabel.setImageBitmap(imageBitmap);
            encodeBitmapAndSaveToFirebase(imageBitmap);
        }
    }
...

onActivityResult() contains the following information:

  • The requestCode parameter represents the REQUEST_IMAGE_CAPTURE value we provided in the startActivityForResult() method that launched this activity.

  • The resultCode parameter represents the status of the activity (i.e. whether it was completed successfully, cancelled, etc.)

  • The data is an Intent object that includes intent extras containing the information being returned. In our case, an image.

With this in mind, we're doing the following in the code above:

  • We double-check that the requestCode matches our REQUEST_IMAGE_CAPTURE constant. This confirms that the information being returned is indeed from the request we executed in startActivityForResult().

  • We call getExtras() on the data object to retrieve the intent extras it contains.

  • We create a new Bitmap object called imageBitmap from the intent extra stored under the key "data" (this is our image).

  • mImageLabel.setImageBitmap(imageBitmap); sets our detail view's ImageView to contain the imageBitmap object returned from the camera. This immediately places the new photo in the detail view.

  • We then call a custom method that will encode our image in Base64 and save it to Firebase. Even though the previous line of code immediately sets the ImageView, we must still save it to Firebase if we'd like it to remain there when we re-open the app.

Base64 Encoding

Thankfully, our existing Firebase database is capable of storing images in several formats. In this lesson we'll use Base64. Base64 is a binary-to-text encoding scheme: the raw bytes of the object being encoded are turned into one long string of printable characters. (Firebase also offers a dedicated Storage service for hosting photos and videos directly, but an encoded string in the database works well for small thumbnails like ours.)

Conveniently for our purposes, Android has built-in tools (the android.util.Base64 class) to help manage encoding and decoding objects in Base64. We'll use them to process this image, save it to Firebase, and later retrieve and decode it.
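To make the idea concrete, here's a minimal sketch of a Base64 round trip using android.util.Base64. The input string here is arbitrary and only for illustration; in our app we'll encode the bytes of a compressed Bitmap instead:

import android.util.Base64;
import java.nio.charset.StandardCharsets;

public class Base64RoundTripExample {
    public static void demonstrate() {
        // Any binary data can be round-tripped; here we use a short string's bytes.
        byte[] rawBytes = "any binary data".getBytes(StandardCharsets.UTF_8);

        // Encode: bytes -> a plain-text string we can store in the Firebase database.
        String encoded = Base64.encodeToString(rawBytes, Base64.DEFAULT);

        // Decode: the string -> the original bytes, unchanged.
        byte[] decoded = Base64.decode(encoded, Base64.DEFAULT);
    }
}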

Saving Encoded Images

In the above code, we called a method encodeBitmapAndSaveToFirebase() with the photo we gathered. Let's write that method now:

RestaurantDetailFragment.java

...
    public void encodeBitmapAndSaveToFirebase(Bitmap bitmap) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, baos);
        String imageEncoded = Base64.encodeToString(baos.toByteArray(), Base64.DEFAULT);

        DatabaseReference ref = FirebaseDatabase.getInstance()
                .getReference(Constants.FIREBASE_CHILD_RESTAURANTS)
                .child(FirebaseAuth.getInstance().getCurrentUser().getUid())
                .child(mRestaurant.getPushId())
                .child("imageUrl");
        ref.setValue(imageEncoded);
    }
...
  • We create a new ByteArrayOutputStream object and name it baos. The name has no special meaning; it's simply a common go-to naming convention for ByteArrayOutputStream objects. As described in the Android documentation, this object is simply a place where we may temporarily store our data while working with it.

  • We compress our image using Android's built-in compress() method. The first argument specifies the format the image should be in. The second argument indicates the quality we'd like to save the image in (this is a 0-100 scale, 100 being the highest possible quality). The third argument is the ByteArrayOutputStream we've just created, which is where Android will place the compressed data.

  • Next, we use Android's built-in Base64.encodeToString() method to encode this array of bytes into a long Base64 string. In the arguments, we turn the data we've placed in baos into an array of individual bytes, and specify the type of encoding we'd like to use (the default Base64).

  • Finally, we locate the node containing the current image URL for this specific restaurant on this specific user's saved restaurants list, and overwrite it with our new, encoded image.

Now, we should be able to launch the application, select our camera icon, and take an image. Then, if we navigate to this specific restaurant's node in the current user's saved restaurants, we can see that the URL from the Yelp API has been replaced with a long, Base64 encoded string!

base-64-in-firebase

Note: If you have any issue taking a photograph in your Android emulator, double-check that your emulator has been set up to use the computer's webcam as its camera, as depicted in this video.

If we navigate away from our SavedRestaurantsListActivity, however, we'll notice that the imageUrl property resets back to the original URL provided by Yelp. This is because we're currently re-saving the entire Restaurant object in our FirebaseRestaurantListAdapter; until now, we hadn't needed to set just a single property. Let's change our existing setIndexInFirebase() method so that it only ever sets the index property rather than resetting the entire object:

FirebaseRestaurantListAdapter.java

    private void setIndexInFirebase() {
        for (Restaurant restaurant : mRestaurants) {
            int index = mRestaurants.indexOf(restaurant);
            DatabaseReference ref = getRef(index);
            ref.child("index").setValue(Integer.toString(index));
        }
    }

Retrieving and Decoding Images

Now that our images are encoded and saved in Firebase, we need to be able to retrieve and decode them so we can display them in our application.

List View

Our bindRestaurant() method in FirebaseRestaurantViewHolder currently contains logic for using the Picasso library to handle image resizing in the "Saved Restaurants" list view. Let's also handle decoding our images here:

FirebaseRestaurantViewHolder.java

...
    public void bindRestaurant(Restaurant restaurant) {
        ...
        if (!restaurant.getImageUrl().contains("http")) {
            try {
                Bitmap imageBitmap = decodeFromFirebaseBase64(restaurant.getImageUrl());
                mRestaurantImageView.setImageBitmap(imageBitmap);
            } catch (IOException e) {
                e.printStackTrace();
            }
        } else {
            // This block of code should already exist; we're just moving it into the 'else' statement:
            Picasso.with(mContext)
                    .load(restaurant.getImageUrl())
                    .resize(MAX_WIDTH, MAX_HEIGHT)
                    .centerCrop()
                    .into(mRestaurantImageView);
        }

        mNameTextView.setText(restaurant.getName());
        mCategoryTextView.setText(restaurant.getCategories().get(0));
        mRatingTextView.setText("Rating: " + restaurant.getRating() + "/5");
    }
...
  • First, we check whether the image URL returned from the database does not contain "http". Because our application saves the image URL from the Yelp API by default, we know that if "http" is not included in the value saved in our database, it isn't a Yelp URL and must be one of our encoded images.

  • We define a new Bitmap object called imageBitmap, and set it to the result of running decodeFromFirebaseBase64() (which we will write in a moment) on the encoded string. We then set mRestaurantImageView to display our newly-decoded image. We've also included some error handling in case this doesn't work as expected.

  • If the image does contain "http", we execute the same block of code using Picasso that we did previously.

  • We then set the text in our TextViews as normal.

Next, let's write the method responsible for decoding Base64:

FirebaseRestaurantViewHolder.java

...
    public static Bitmap decodeFromFirebaseBase64(String image) throws IOException {
        byte[] decodedByteArray = android.util.Base64.decode(image, Base64.DEFAULT);
        return BitmapFactory.decodeByteArray(decodedByteArray, 0, decodedByteArray.length);
    }
...
  • Here, we simply take the encoded image string and use Android's built-in android.util.Base64 utility to decode it back into a byte array.

  • Then we use the decodeByteArray() method built into Android's BitmapFactory class, as described here, to turn this byte array back into a Bitmap image. The first argument is the byte array itself. The second argument is the position in the array where the method should begin decoding (everything in this array is our image, so we simply start at 0), and the third argument is the number of bytes that should be decoded (again, everything in the array is our image, so we instruct it to decode the entire length).

If we launch our application, we should be able to take our own custom picture for one of our saved restaurants, navigate away, and return to "Saved Restaurants" and still see it in our list!

Detail View

Again, the code we've just added handles decoding our custom images in the list of all saved restaurants. We also want our custom images to appear in the restaurant's individual detail view. Let's handle that now!

We'll include some very similar logic in the RestaurantDetailFragment's onCreateView() method:

RestaurantDetailFragment.java

...
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        View view = inflater.inflate(R.layout.fragment_restaurant_detail, container, false);
        ButterKnife.bind(this, view);

        if (!mRestaurant.getImageUrl().contains("http")) {
            try {
                Bitmap image = decodeFromFirebaseBase64(mRestaurant.getImageUrl());
                mImageLabel.setImageBitmap(image);
            } catch (IOException e) {
                e.printStackTrace();
            }
        } else {
            // This block of code should already exist; we're just moving it into the 'else' statement:
            Picasso.with(view.getContext())
                    .load(mRestaurant.getImageUrl())
                    .resize(MAX_WIDTH, MAX_HEIGHT)
                    .centerCrop()
                    .into(mImageLabel);
        }
        ...
    }
...

And define the same decoding method from FirebaseRestaurantViewHolder here in RestaurantDetailFragment:

RestaurantDetailFragment.java

...
    public static Bitmap decodeFromFirebaseBase64(String image) throws IOException {
        byte[] decodedByteArray = android.util.Base64.decode(image, Base64.DEFAULT);
        return BitmapFactory.decodeByteArray(decodedByteArray, 0, decodedByteArray.length);
    }
...

Now, we should be able to run the application, add a custom photo to a saved restaurant, navigate away from this restaurant, and see that our image is still there if we later come back to it. We can even reboot the emulator, and our image will still be there!

custom-image

Manifest Options

Now that our application uses the camera feature on our users' devices, let's make sure to detail this accordingly in our manifest.

AndroidManifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.epicodus.myrestaurants">

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-feature android:name="android.hardware.camera" android:required="false" />
    ...

Here, we include the line <uses-feature android:name="android.hardware.camera" android:required="false" /> to declare that our application uses the camera. As explained in the Android Documentation, the list in our manifest corresponds to the set of feature constants made available by the Android PackageManager. Each feature an app uses must be specified on its own line.

You may also notice that the above code sets android:required to false. This means that while our application uses the camera, the camera isn't strictly required to run the application. If we had instead set this to true and published our app on Google Play, it would only be shown to devices that have a camera.
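Because android:required is false, devices without a camera can still install the app, so it's good practice to also check for the camera feature at runtime before launching the camera intent. Here's a minimal sketch using PackageManager.hasSystemFeature(); the Toast message is just an example, and the resolveActivity() guard in onLaunchCamera() already covers the common case:

    ...
    PackageManager pm = getActivity().getPackageManager();
    if (pm.hasSystemFeature(PackageManager.FEATURE_CAMERA)) {
        onLaunchCamera();
    } else {
        // No back-facing camera on this device; tell the user instead of failing silently.
        Toast.makeText(getActivity(), "No camera available on this device", Toast.LENGTH_SHORT).show();
    }
    ...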


Example GitHub Repo for MyRestaurants


Source: https://www.learnhowtoprogram.com/android/gestures-animations-flexible-uis/using-the-camera-and-saving-images-to-firebase
