Programming Guide
Version 1.3.2
Camera SDK Processors provide the user with features such as High Dynamic Range (HDR), low-light
photography, Depth of Field, and landscape (panorama) photography.
1.2 Architecture
The architecture consists of:
[Architecture figure: the Camera SDK packages (camera, camera.processor, camera.filter, camera.image) layered above the Native Engine, Platform, HAL, and Linux; representative classes shown include SCaptureFailure, STotalCaptureResult, SDngCreator, and SCameraImage.]
Replace xxx in the figure above with camera, camera.processor, camera.filter, or camera.image
to get the full package name in question.
Camera: This package provides basic functionalities that are required for camera
initialization, capture, recording, etc. It allows access to individual camera devices
connected to a Samsung Android device. It also allows management of Samsung specific
hardware controls like Phase Auto-Focus, Metering Mode, Real Time HDR, etc.
Processor: This package offers various camera add-on functionalities like HDR, Low Light,
and Depth of Field capture, thereby enabling the users to capture in different modes.
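For reference, a sketch of the resulting imports; which class lives in which sub-package is inferred from the figure above, so treat the exact lines as illustrative and verify them against the API reference:

// Illustrative imports only; the sub-package layout follows the figure above.
import com.samsung.android.sdk.camera.SCamera;
import com.samsung.android.sdk.camera.SCameraManager;
import com.samsung.android.sdk.camera.processor.SCameraProcessorManager;
import com.samsung.android.sdk.camera.filter.SCameraFilter;
import com.samsung.android.sdk.camera.image.SCameraImage;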
1.6 Components
o camera-v1.3.2.aar
o camera-v1.3.2-light.aar
o sdk-v1.0.0.jar
Imported package
o com.samsung.android.sdk.camera
1. Add camera-v1.3.2.aar and sdk-v1.0.0.jar to the libs folder to support all features of
Camera SDK (Figure 3).
repositories {
    flatDir {
        dirs 'libs'
    }
}

dependencies {
    // before gradle 3.4
    compile(name: 'camera-v1.3.2', ext: 'aar')       // for full package
    compile(name: 'camera-v1.3.2-light', ext: 'aar') // for light package
    compile files('libs/sdk-v1.0.0.jar')
}
4. Add the following permission to the application manifest if the application needs
Camera access:
<uses-permission android:name="android.permission.CAMERA"/>
5. Select Android 5.0 or higher as a project build target in the project properties.
<uses-permission android:name="com.samsung.android.provider.filterprovider.permission.READ_FILTER" />
<uses-permission android:name="com.samsung.android.provider.stickerprovider.permission.READ_STICKER_PROVIDER" />
<uses-permission android:name="com.samsung.android.aremoji.provider.permission.READ_STICKER_PROVIDER" />
<uses-permission android:name="com.samsung.android.providers.context.permission.WRITE_USE_APP_FEATURE_SURVEY" />
} catch (SsdkUnsupportedException e) {
    int eType = e.getType();
    if (eType == SsdkUnsupportedException.VENDOR_NOT_SUPPORTED) {
        // The device is not a Samsung device.
    } else if (eType == SsdkUnsupportedException.DEVICE_NOT_SUPPORTED) {
        // The device does not support the Camera SDK.
    } else if (eType == SsdkUnsupportedException.SDK_VERSION_MISMATCH) {
        // There is an SDK version mismatch.
    }
}
mSCameraManager = mScamera.getSCameraManager();
int versionCode = mScamera.getVersionCode();
String versionName = mScamera.getVersionName();
initialize(): initializes the Camera SDK and loads the Native Engine of SCameraSDK.
It defines the following common fields, which are used as keys for the SCameraProcessorParameter
class, and defines the guidelines for executing each SCameraProcessor object. Processor-specific
parameter keys (which are defined by the processor itself) are explained under the respective
processor.
Each Processor requires an initial set of values for some of its keys. The
SCameraProcessorParameter class is used to supply the values for these keys.
The SCameraProcessorParameter class is used to set and get the values of the keys defined in
SCameraProcessor for each processor. It has the following methods:
Before using any Processor, provide its initial settings and call the initialize()
method. This method initializes the processor and prepares it for use. After use, call
deinitialize(), which frees the resources held by the Processor.
As of SDK version 1.1.0, the close() method has been added as a common interface of
SCameraProcessor. Once an SCameraProcessor is closed, all of its methods throw an
IllegalStateException.
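A minimal sketch of this lifecycle (the try/finally arrangement is a suggested pattern, not something mandated by the SDK):

mProcessor.initialize();
try {
    /* use the processor: submit requests, process images, ... */
} finally {
    mProcessor.deinitialize();
    mProcessor.close();   // after close(), any further call on the processor throws IllegalStateException
}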
NOTE: The initialization keys cannot be changed after initialize() is called. To change key
values, deinitialize() needs to be called first. Initialization keys for each processor are listed
under the Initializing Processor section.
mProcessor = mSCameraProcessorManager.createProcessor(
        SCameraProcessorManager.PROCESSOR_TYPE_PANORAMA);
SCameraProcessorParameter param = mProcessor.getParameters();
param.set(SCameraPanoramaProcessor.STILL_OUTPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraPanoramaProcessor.STREAM_SIZE, previewSize);
param.set(SCameraPanoramaProcessor.CAMERA_ID, Integer.parseInt(mCameraId));
mProcessor.setParameters(param);
mProcessor.initialize();
/* do something with SDK */
mProcessor.deinitialize();
3.1 SCameraHdrProcessor
SCameraHdrProcessor creates a single high dynamic range image from a set of 3 images captured
with different exposure values. This class inherits from SCameraProcessor.
SCameraProcessorManager processorManager = mSCamera.getSCameraProcessorManager();
if (!processorManager.isProcessorAvailable(SCameraProcessorManager.PROCESSOR_TYPE_HDR)) {
    // This device does not support the HDR Processor.
    return false;
}
param.set(SCameraHdrProcessor.STILL_INPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraHdrProcessor.STILL_OUTPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraHdrProcessor.STILL_SIZE, new Size(mImageReader.getWidth(),
mImageReader.getHeight()));
mProcessor.setParameters(param);
mProcessor.initialize();
Error conditions can be handled in onError, and after process completion, we get the final
image.
mProcessor.setEventCallback(new SCameraHdrProcessor.EventCallback() {
    @Override
    public void onProcessCompleted(Image result) {
        // Result image handling needs to be done here
    }
    @Override
    public void onError(int code) {
        // Error code handling
    }
}, mBackgroundHandler);
Three images with different exposure values are captured and then supplied to the
requestMultiProcess() method of SCameraHdrProcessor. The exposure values of all three
images must differ from one another.
MAX_COUNT = param.get(SCameraHdrProcessor.MULTI_INPUT_COUNT_RANGE).getUpper();
mImageList = new ArrayList<Image>();
mJpegReader = ImageReader.newInstance(jpegsize.getWidth(), jpegsize.getHeight(),
        ImageFormat.JPEG, MAX_COUNT + 1);
mJpegReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        CUR_COUNT++;
        showTextToDebugMessageView("Input image captured(" + CUR_COUNT + "/" + MAX_COUNT + ")");
        mImageList.add(reader.acquireNextImage());
        if (CUR_COUNT == MAX_COUNT) {
            mProcessor.requestMultiProcess(mImageList);
            int d = 0;
            for (Image i : mImageList) {
                mImageSaver.save(i, "Hdr_INPUT(" + ++d + ").jpg");
            }
            mImageList.clear();
        }
    }
}, mBackgroundHandler);
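The SDK does not dictate how the three differently exposed inputs are produced. One hedged approach, assuming SCaptureRequest and SCameraCaptureSession mirror the corresponding android.hardware.camera2 API (TEMPLATE_STILL_CAPTURE, CONTROL_AE_EXPOSURE_COMPENSATION, and captureBurst() are assumptions here, and mCaptureCallback is a hypothetical callback name), is to submit a burst of requests with different AE compensation values targeting mJpegReader:

// Sketch only: request three differently exposed JPEGs aimed at mJpegReader.
// Check the device's supported compensation range before choosing the values.
try {
    List<SCaptureRequest> burst = new ArrayList<>();
    int[] aeSteps = {-2, 0, 2};   // illustrative values only
    for (int ae : aeSteps) {
        SCaptureRequest.Builder builder =
                mSCameraDevice.createCaptureRequest(SCameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(mJpegReader.getSurface());
        builder.set(SCaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, ae);
        burst.add(builder.build());
    }
    mSCameraCaptureSession.captureBurst(burst, mCaptureCallback, mBackgroundHandler);
} catch (CameraAccessException e) {
    // Handle error.
}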
3.2 SCameraLowLightProcessor
SCameraLowLightProcessor is an image enhancer for images captured in dark, dim, or low-light
environments. Multiple images captured at the same exposure are needed as input; they are
combined to remove noise and create a single enhanced output frame.
The processor is created from SCameraProcessorManager. The following
SCameraProcessorParameter keys need values to initialize the SCameraLowLightProcessor:
SCameraProcessorManager processorManager = mSCamera.getSCameraProcessorManager();
if (!processorManager.isProcessorAvailable(SCameraProcessorManager.PROCESSOR_TYPE_LOW_LIGHT)) {
    // This device does not support the Low Light Processor.
    return false;
}
mProcessor = processorManager.createProcessor(SCameraProcessorManager.PROCESSOR_TYPE_LOW_LIGHT);
SCameraProcessorParameter param = mProcessor.getParameters();
param.set(SCameraLowLightProcessor.STILL_INPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraLowLightProcessor.STILL_OUTPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraLowLightProcessor.STILL_SIZE, jpegsize);
mProcessor.setParameters(param);
mProcessor.initialize();
Error conditions can be handled in onError, and after process completion, we get the final
image.
mProcessor.setEventCallback(new SCameraLowLightProcessor.EventCallback() {
@Override
public void onProcessCompleted(Image result) {
//Result image handling needs to be done here
}
@Override
public void onError(int code) {
//Error code handling
}
}, mBackgroundHandler);
Images with the same exposure value are captured and then supplied to the requestMultiProcess()
method of SCameraLowLightProcessor.
MAX_COUNT = param.get(SCameraLowLightProcessor.MULTI_INPUT_COUNT_RANGE).getUpper();
mImageList = new ArrayList<Image>();
mJpegReader = ImageReader.newInstance(jpegsize.getWidth(), jpegsize.getHeight(),
        ImageFormat.JPEG, MAX_COUNT + 1);
mJpegReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        CUR_COUNT++;
        mImageList.add(reader.acquireNextImage());
        if (CUR_COUNT == MAX_COUNT) {
            mProcessor.requestMultiProcess(mImageList);
            for (Image i : mImageList) i.close();
            mImageList.clear();
        }
    }
}, mBackgroundHandler);
3.3 SCameraPanoramaProcessor
SCameraPanoramaProcessor allows the developer to capture landscape panoramic views by
capturing multiple images, assisting with the capture direction, and stitching the final
output image. (SCameraPanoramaProcessor has been deprecated since version 1.3.1.)
Panoramic views are captured as a series of images in any one of the four available directions
(down, up, left, right). Once capture starts in a chosen direction, SCameraPanoramaProcessor
provides the user with the following messages to assist in capturing:
SCameraProcessorManager processorManager = mSCamera.getSCameraProcessorManager();
if (!processorManager.isProcessorAvailable(SCameraProcessorManager.PROCESSOR_TYPE_PANORAMA)) {
    // This device does not support the Panorama Processor.
    return false;
}
mProcessor = processorManager.createProcessor(SCameraProcessorManager.PROCESSOR_TYPE_PANORAMA);
SCameraProcessorParameter param = mProcessor.getParameters();
param.set(SCameraPanoramaProcessor.STILL_OUTPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraPanoramaProcessor.STREAM_SIZE, previewSize);
param.set(SCameraPanoramaProcessor.CAMERA_ID, Integer.parseInt(mCameraId));
mProcessor.setParameters(param);
mProcessor.initialize();
Unlike HDR and Low Light, SCameraPanoramaProcessor requires the preview data continuously
to assist with the direction of capture. It also selectively captures images, which it finally
stitches into the output image.
SCaptureRequest.Builder builder =
mSCameraDevice.createCaptureRequest(SCameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(mSurface);
builder.addTarget(mProcessor.getInputSurface());
mSCameraCaptureSession.setRepeatingRequest(builder.build(),
this, mBackgroundHandler);
Callbacks for direction assistance and live preview stitching are provided by the
SCameraPanoramaProcessor.EventCallback interface. It is important to implement this
interface to get the required callbacks from the SDK. SCameraPanoramaProcessor.EventCallback
provides the following interface methods:
onDirectionChanged(int direction): Called when the direction hint is given from the
framework library
onError(int code): Called to notify errors to the application. Code supplied is error
code
onLivePreviewDataStitched(Bitmap data): Called to send the live stitching preview
data
onMaxFramesCaptured(): Called when all (maximum) frames required for Panorama
are captured and the framework will start the final stitching process
onMovingTooFast(): Called to notify the application to display help information to
move the device slowly
onProcessCompleted(Image result): Called to deliver the stitched final result. Once
onProcessCompleted() callback is received, stop() or cancel() should not be called
onRectChanged(int x, int y): Called when the panorama capturing rectangle has
changed. The rectangle coordinates range from (-1000, -1000) to (1000, 1000)
onStitchingProgressed(int progress): Called to notify the progress of stitching the
panorama to the application after complete capture. The range of progress notification
is 0 - 100
mProcessor.setEventCallback(new SCameraPanoramaProcessor.EventCallback() {
    @Override
    public void onError(int code) {
        //Error handling code
    }
    @Override
    public void onRectChanged(int x, int y) {
        //Rect updates
    }
    @Override
    public void onDirectionChanged(int direction) {
        //Direction updates
    }
    @Override
    public void onStitchingProgressed(int progress) {
        //Stitching progress
    }
    @Override
    public void onLivePreviewDataStitched(Bitmap bitmap) {
        //Live stitched bitmap
    }
    @Override
    public void onMaxFramesCaptured() {
        //Max frames reached.
    }
    @Override
    public void onMovingTooFast() {
        //Moving too fast.
    }
    @Override
    public void onProcessCompleted(Image image) {
        //Final output image
    }
}, mBackgroundHandler);
The SCameraDepthOfFieldProcessor takes two images as input: one focused on the user-selected
region and the other focused at infinity.
SCameraProcessorManager processorManager = mSCamera.getSCameraProcessorManager();
if (!processorManager.isProcessorAvailable(SCameraProcessorManager.PROCESSOR_TYPE_DEPTH_OF_FIELD)) {
    // This device does not support the Depth of Field Processor.
    return false;
}
mProcessor = processorManager.createProcessor(SCameraProcessorManager.PROCESSOR_TYPE_DEPTH_OF_FIELD);
SCameraProcessorParameter param = mProcessor.getParameters();
param.set(SCameraDepthOfFieldProcessor.STILL_INPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraDepthOfFieldProcessor.STILL_OUTPUT_FORMAT, ImageFormat.JPEG);
mProcessor.setParameters(param);
mProcessor.initialize();
1. Capture two images, one focused at infinity and the other focused on the user-selected
region, and pass them as input to requestMultiImage() as an ImageList.
mProcessor.requestMultiImage(mImageList);
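The two inputs passed above can be captured with a two-request burst, following the same pattern as the HDR sketch earlier but varying focus instead of exposure. LENS_FOCUS_DISTANCE, CONTROL_AF_MODE, and TEMPLATE_STILL_CAPTURE are assumed to mirror the camera2 keys, and mJpegReader is the hypothetical still-image reader from the earlier samples:

// Sketch only: build one request per focus distance, then submit both with captureBurst()
// exactly as in the HDR sketch. A focus distance of 0.0f means infinity.
SCaptureRequest.Builder builder =
        mSCameraDevice.createCaptureRequest(SCameraDevice.TEMPLATE_STILL_CAPTURE);
builder.addTarget(mJpegReader.getSurface());
builder.set(SCaptureRequest.CONTROL_AF_MODE, SCaptureRequest.CONTROL_AF_MODE_OFF);
builder.set(SCaptureRequest.LENS_FOCUS_DISTANCE, 2.0f);   // user-selected focus; illustrative value
// ...second request: identical, but with LENS_FOCUS_DISTANCE set to 0.0f (infinity).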
mProcessor.setEventCallback(new SCameraDepthOfFieldProcessor.EventCallback() {
    @Override
    public void onProcessCompleted(Image image) {
        /* Result image arrived. Save it or do other processing */
        ....
    }
    @Override
    public void onError(int error) {
        /* Handle error */
    }
}, mBackgroundHandler);
The SCameraFilter class is used to specify particular filter effects to be used with
SCameraEffectProcessor. To use filters, the following permission needs to be declared in
AndroidManifest.xml:
<uses-permission android:name="com.samsung.android.provider.filterprovider.permission.READ_FILTER" />
<uses-permission android:name="com.samsung.android.provider.stickerprovider.permission.READ_STICKER_PROVIDER" />
To verify if the SCameraFilter feature is supported on the device, please use the
isFeatureEnabled() method.
mSCameraFilterManager = mSCamera.getSCameraFilterManager();
List<SCameraFilterInfo> mFilterInfoList = mSCameraFilterManager.getAvailableFilters();
SCameraFilterInfo filterInfo = mFilterInfoList.get(0);
SCameraFilter mFilter = mSCameraFilterManager.createFilter(filterInfo);
mFilter.setParameter(paramName, progress + paramRange.getLower());
After initializing the filter, the SCameraFilter class can be used for applying image
processing techniques according to the needs of an image. It can be used to process Bitmap
data using the processImage() call.
mFilter.processImage(inputFile, outputFile);
NOTE: If you are working with an SCameraFilter instance of filter type FILTER_TYPE_FACE_AR
or FILTER_TYPE_AR_EMOJI, processImage(...) will not work. For those filters, use
requestSnapCapture().
SCameraProcessorManager processorManager =
mSCamera.getSCameraProcessorManager();
if (!processorManager.isProcessorAvailable(SCameraProcessorManager.PROCESSOR_TYPE_EFFECT)) {
    // This device does not support the Effect Processor.
    return false;
}
mProcessor = processorManager.createProcessor(SCameraProcessorManager.PROCESSOR_TYPE_EFFECT);
mFilterInfoList = mSCameraFilterManager.getAvailableFilters();
SCameraProcessorParameter param = mProcessor.getParameters();
param.set(SCameraEffectProcessor.STILL_SIZE, jpegsize);
param.set(SCameraEffectProcessor.STILL_INPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraEffectProcessor.STILL_OUTPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraEffectProcessor.STREAM_SIZE, previewSize);
param.set(SCameraEffectProcessor.FILTER_EFFECT,
mSCameraFilterManager.createFilter(mFilterInfoList.get(1)));
mProcessor.setParameters(param);
mProcessor.initialize();
To set the input image, pass the captured image as an input to the requestProcess() call.
The output image can be saved in registered eventCallback onProcessCompleted() method.
mProcessor.requestProcess(image);
NOTE: If you are working with a filter of type FILTER_TYPE_FACE_AR, you must use
requestSnapCapture(). Unlike requestProcess(), it does not require an input image. Instead,
it provides a snapshot of the filter-effect-processed preview image.
mProcessor.requestSnapCapture();
mProcessor.setEventCallback(new SCameraEffectProcessor.EventCallback() {
    @Override
    public void onProcessCompleted(Image image) {
        /* Result image arrived. Save it or do other processing */
        ....
    }
    @Override
    public void onError(int i) {
        /* Handle error */
        ....
    }
}, mBackgroundHandler);
1. The preview surface or media recorder surface is given as the output surface for the
effect processor by calling setOutputSurface().
2. The input surface from SCameraEffectProcessor is given as a target to the
SCaptureRequest.
3. Processing is started and stopped by calling startStreamProcessing() and
stopStreamProcessing(), respectively, on the SCameraEffectProcessor object.
mProcessor.setOutputSurface(mPreviewSurface);
public void startPreview() {
    try {
        // Starts displaying the preview.
        mSCameraSession.setRepeatingRequest(mPreviewBuilder.build(),
                mSessionCaptureCallback, mBackgroundHandler);
        setState(CAMERA_STATE.PREVIEW);
        // Must be called after setRepeatingRequest (which includes the camera preview surface).
        mProcessor.startStreamProcessing();
    } catch (CameraAccessException e) {
        // Handle error.
    }
}
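A corresponding teardown sketch; stopStreamProcessing() comes from the list above, while stopRepeating() and the ordering are assumptions based on the camera2 session API:

public void stopPreview() {
    try {
        // Stop feeding preview frames into the processor, then stop the repeating request.
        mProcessor.stopStreamProcessing();
        mSCameraSession.stopRepeating();   // assumed camera2-style method on the SCamera session
    } catch (CameraAccessException e) {
        // Handle error.
    }
}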
SCameraEffectProcessor can also be used while video recording from the camera. The
developer can choose the effect and apply it directly on video while recording it.
mMediaRecorder.setVideoFrameRate(MAX_PREVIEW_FPS);
mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mMediaRecorder.setOrientationHint(getJpegOrientation());
mMediaRecorder.prepare();
}
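Starting the recording itself is not shown above. A hedged sketch, inferring from the setRecordingSurface(null) teardown below that the recorder surface is attached the same way; MediaRecorder.getSurface() and start() are standard Android APIs:

// Route the effect-processed stream into the recorder, then start recording.
mProcessor.setRecordingSurface(mMediaRecorder.getSurface());
mMediaRecorder.start();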
mProcessor.setRecordingSurface(null);
// Stops recording
mMediaRecorder.stop();
mMediaRecorder.reset();
if (isPausing == false) {
try {
prepareMediaRecorder();
} catch (IOException e) {
e.printStackTrace();
}
}
setState(CAMERA_STATE.PREVIEW);
}
return false;
}
mProcessor = processorManager.createProcessor(SCameraProcessorManager.PROCESSOR_TYPE_HAZE_REMOVE);
SCameraProcessorParameter param = mProcessor.getParameters();
param.set(SCameraHazeRemoveProcessor.STILL_SIZE, jpegsize);
param.set(SCameraHazeRemoveProcessor.STILL_INPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraHazeRemoveProcessor.STILL_OUTPUT_FORMAT, ImageFormat.JPEG);
param.set(SCameraHazeRemoveProcessor.STREAM_SIZE, previewSize);
param.set(SCameraHazeRemoveProcessor.HAZE_REMOVE_STRENGTH, 3);
mProcessor.setParameters(param);
mProcessor.initialize();
To set the input image, pass the captured image as an input to the requestProcess() call.
The output image can be saved in registered eventCallback onProcessCompleted() method.
mProcessor.requestProcess(image);
mProcessor.setEventCallback(new SCameraHazeRemoveProcessor.EventCallback() {
    @Override
    public void onProcessCompleted(Image image) {
        /* Result image arrived. Save it or do other processing */
        ....
    }
    @Override
    public void onError(int code) {
        // Error handling code
    }
}, mBackgroundHandler);
mProcessor.setOutputSurface(mPreviewSurface);
public void startPreview() {
    try {
        // Starts displaying the preview.
        mSCameraSession.setRepeatingRequest(mPreviewBuilder.build(),
                mSessionCaptureCallback, mBackgroundHandler);
        setState(CAMERA_STATE.PREVIEW);
        mProcessor.startStreamProcessing();
    } catch (CameraAccessException e) {
        // Handle error.
    }
}
SCameraProcessorManager processorManager =
mSCamera.getSCameraProcessorManager();
if(!processorManager.isProcessorAvailable
(SCameraProcessorManager.PROCESSOR_TYPE_GIF)) {
//This device does not support GIF Processor.
return false;
}
mProcessor = processorManager.createProcessor
(SCameraProcessorManager.PROCESSOR_TYPE_GIF);
SCameraProcessorParameter param = mProcessor.getParameters();
mProcessor.setParameters(param);
mProcessor.initialize();
Multiple images are captured and then supplied as a list of bitmaps to the
requestMultiProcess() method of SCameraGifProcessor.
if(mImageList.size() == MAX_COUNT)
mProcessor.requestMultiProcess(mImageList);
}
}, mBackgroundHandler);
Error conditions can be handled in onError, and after process completion the final GIF image
is output as a byte array.
mProcessor.setEventCallback(new SCameraGifProcessor.EventCallback() {
@Override
public void onProcessCompleted(byte[] data) {
//Result Image arrived. Save it
}
@Override
public void onError(int code) {
//Error handling code
}
}, mBackgroundHandler);
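Saving the result is plain Java I/O, since the callback delivers the encoded GIF as a byte array. The file location below is only an example, and it assumes the callback is registered from an Activity or other Context:

// Inside onProcessCompleted(byte[] data): write the encoded GIF to an app-specific file.
// Requires java.io.File, java.io.FileOutputStream, and java.io.IOException.
File gifFile = new File(getExternalFilesDir(null), "result.gif");
try (FileOutputStream fos = new FileOutputStream(gifFile)) {
    fos.write(data);
} catch (IOException e) {
    e.printStackTrace();
}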
4. Using Image
Image is a framework that works in heterogeneous environments, in which application
developers get the maximum benefit in terms of performance. This framework supports
Before calling any of the image package methods, an SCamera instance has to be created and
initialized in order to load the native dependent libraries.
An SCameraImage instance can be created with the default input format or with one of the
following supported input formats:
FORMAT_RGB24
FORMAT_NV21
FORMAT_YUYV
The default format is FORMAT_NV21, which is also the preferred format for high algorithm
performance.
The SCameraImage class provides methods to get and set the image color and image pixels.
It provides APIs for getting the processed image bitmap and saving it in JPEG or raw format.
SCamera mScamera;
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
mScamera = new SCamera();
try {
mScamera.initialize(this);
} catch (SsdkUnsupportedException e) {
}
SCameraImage image;
image = new SCameraImage(mImagePath, SCameraImage.FORMAT_DEFAULT);
//perform processing using Image package API
. . .
image.saveAsJpeg("/sdcard/DCIM/sample.jpg", 95);
image.saveAsRaw("/sdcard/DCIM/sample.raw");
}
}
SCameraImage's buffer is released when the instance is finalized; however, calling the
release() method explicitly is recommended for efficient memory management.
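A minimal sketch of the recommended pattern (the processing step in the middle is a placeholder):

SCameraImage image = new SCameraImage(mImagePath, SCameraImage.FORMAT_DEFAULT);
try {
    // perform processing using the Image package API
} finally {
    image.release();   // free the native buffer explicitly instead of waiting for finalization
}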
4.2.1 Sobel
The Sobel filter is an image processing technique to create an image that emphasizes edges and
transitions. It calculates the gradient of the source image by applying the Sobel operator in
requested direction. An image gradient is a directional change in the intensity or color of an
image.
mOutputImage = SCameraImageCore.processSobel(mInputImage,100);
mOutputBitmap = getOutputBitmap(mOutputImage);
if(mOutputBitmap == null) {
Toast.makeText(getApplication().getBaseContext(), "Failed to apply effect",
Toast.LENGTH_SHORT).show();
mOutputImage.release();
return false;
} else {
mOutputView.setImageBitmap(mOutputBitmap);
}
Figure 5: Sobel
The median filter is an image processing technique which is often used to remove noise. Each
output pixel contains the median value of the NxN neighborhood around the corresponding
pixel in the input image. Here, N is the kernel size, which must always be an odd number.
The median is calculated by first sorting all the pixel values from the surrounding
neighborhood into numerical order and then replacing the pixel being considered with the
middle pixel value. Hence, with a very large kernel size, processing is slower.
mOutputImage = SCameraImageCore.processMedian(mInputImage,5);
mOutputBitmap = getOutputBitmap(mOutputImage);
if(mOutputBitmap == null) {
Toast.makeText(getApplication().getBaseContext(), "Failed to apply effect" ,
Toast.LENGTH_SHORT).show();
mOutputImage.release();
return false;
} else {
mOutputView.setImageBitmap(mOutputBitmap);
}
Enhances the contrast of the given source image and generates an output image. The
enhanceContrast API takes two input parameters: the contrast factor, which denotes the
percentage of contrast by which the image needs to be enhanced (0 to 1), and the pivot,
which represents the color intensity value about which the contrast is enhanced (0 to 255).
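The corresponding call is not shown in this guide; a hedged reconstruction, assuming enhanceContrast() follows the same SCameraImageCore pattern as processSobel() and processMedian() (the exact signature and argument order are assumptions):

// Assumed signature: source image, contrast factor (0 to 1), pivot intensity (0 to 255).
mOutputImage = SCameraImageCore.enhanceContrast(mInputImage, 0.5f, 128);
mOutputBitmap = getOutputBitmap(mOutputImage);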
Figure 7: Contrast
It is an image intensity adjustment technique through which a gain in contrast can be seen
in the output image.
if(mOutputBitmap == null) {
Toast.makeText(getApplication().getBaseContext(), "Failed to apply effect" ,
Toast.LENGTH_SHORT).show();
mOutputImage.release();
return false;
} else {
mOutputView.setImageBitmap(mOutputBitmap);
}
mOutputImage = SCameraImageCore.warpAffine(mInputImage,mat);
mOutputBitmap = getOutputBitmap(mOutputImage);
mat.release();
if(mOutputBitmap == null) {
Toast.makeText(getApplication().getBaseContext(), "Failed to apply effect" ,
Toast.LENGTH_SHORT).show();
mOutputImage.release();
return false;
} else {
mOutputView.setImageBitmap(mOutputBitmap);
}
Spatial filtering is a neighborhood operation where the value of a pixel in the output image is a
weighted sum of the corresponding neighborhood pixels of input. The weights are defined in
the mask of the filter. The mask could be laplacian, hiboost, Gaussian, etc.
mOutputImage = SCameraImageCore.filterSpatial(mInputImage,matrix);
mOutputBitmap = getOutputBitmap(mOutputImage);
matrix.release();
if(mOutputBitmap == null) {
Toast.makeText(getApplication().getBaseContext(), "Failed to apply effect" ,
Toast.LENGTH_SHORT).show();
mOutputImage.release();
return false;
} else {
mOutputView.setImageBitmap(mOutputBitmap);
}
The image package supports color conversion from one format to another. The following
conversions are supported:
RGB24 to YUYV
YUYV to RGB24
NV21 to YUYV
YUYV to NV21
RGB24 to NV21
NV21 to RGB24
mOutputView.setImageBitmap(outImage.getBitmap());
inImage.release();
outImage.release();
Hiboost:
 0 -1  0
-1  5 -1
 0 -1  0

Laplacian:
-1 -1 -1
-1  8 -1
-1 -1 -1
int N = 7;
SCameraImageMatrix matrix = new SCameraImageMatrix(N, N);
// Fill an N x N kernel with a constant weight.
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        matrix.setAt(i, j, 0.11f);
    }
}
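Using the same SCameraImageMatrix API, the Laplacian mask from the table above can be built and applied with filterSpatial(), as in the earlier sample:

// Build the 3x3 Laplacian kernel listed above and apply it as a spatial filter.
float[][] laplacian = {
        {-1, -1, -1},
        {-1,  8, -1},
        {-1, -1, -1}
};
SCameraImageMatrix kernel = new SCameraImageMatrix(3, 3);
for (int i = 0; i < 3; i++) {
    for (int j = 0; j < 3; j++) {
        kernel.setAt(i, j, laplacian[i][j]);
    }
}
mOutputImage = SCameraImageCore.filterSpatial(mInputImage, kernel);
kernel.release();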
Though every care has been taken to ensure the accuracy of this document, Samsung Electronics Co.,
Ltd. cannot accept responsibility for any errors or omissions or for any loss occurred to any person,
whether legal or natural, from acting, or refraining from action, as a result of the information contained
herein. Information in this document is subject to change at any time without obligation to notify any
person of such changes.
Samsung Electronics Co., Ltd. may have patents or pending patent applications, trademarks, copyrights, or
other intellectual property rights covering subject matter in this document. The furnishing of this
document does not give the recipient or reader any license to these patents, trademarks, copyrights, or
other intellectual property rights.
No part of this document may be communicated, distributed, reproduced or transmitted in any form or
by any means, electronic or mechanical or otherwise, for any purpose, without the prior written
permission of Samsung Electronics Co. Ltd.
All brand names and product names mentioned in this document are trademarks or registered
trademarks of their respective owners.