This topic describes the prerequisites, effect, technical principle, and integration steps for masked danmaku (bullet comments).
This document assumes that you have already implemented basic danmaku rendering in your App.
If you want to further improve the danmaku experience so that comments do not cover important content during playback, follow this document to implement masked danmaku. The effect is shown below:
The server processes the video source to generate mask information. During playback, the SDK fetches this mask information and delivers it to your app via a callback as SVG XML; your app parses the SVG and renders the mask. Danmaku rendering itself must already be implemented by your app.
TTSDK returns the SVG mask information for the current frame as a string, synchronized with the playing picture by timestamp (pts). An example SVG:
```xml
<svg version="1.0" xmlns="http://www.w3.org/2000/svg" width="455px" height="256px"
     viewBox="0 0 455 256" preserveAspectRatio="xMidYMid meet">
  <g transform="translate(0,256) scale(0.1,-0.1)" fill="#000000">
    <path d="M0 1280 l0 -1280 694 0 695 0 19 53 c11 28 36 84 55 122 20 39 45 104 56 145 18 67 48 157 96 290 9 25 22 52 30 60 42 49 89 220 99 363 7 100 51 230 89 264 12 11 63 33 112 47 50 15 117 35 150 45 56 17 60 20 63 51 3 30 -10 73 -61 195 -11 28 -24 70 -27 95 -4 25 -13 66 -21 92 -10 38 -11 61 0 125 14 91 17 187 7 224 -6 18 -3 31 9 42 20 21 68 22 84 3 25 -30 66 -29 133 5 112 57 218 33 288 -65 19 -27 61 -78 92 -113 36 -40 60 -77 64 -98 7 -37 -8 -121 -31 -169 -8 -17 -21 -59 -29 -94 -8 -34 -23 -77 -35 -95 -68 -108 -71 -126 -23 -164 20 -16 69 -47 107 -68 39 -20 106 -61 150 -90 44 -29 99 -62 122 -74 23 -11 48 -30 56 -41 11 -16 21 -19 40 -15 38 10 91 -11 132 -53 46 -47 57 -73 44 -110 -22 -65 -123 -81 -154 -24 l-14 27 -1 -35 c0 -19 7 -67 15 -105 8 -39 15 -86 15 -105 1 -33 4 -36 65 -64 119 -53 150 -110 120 -218 -17 -63 -87 -220 -111 -248 -12 -15 -114 -190 -114 -196 0 -2 331 -4 735 -4 l735 0 0 1280 0 1280 -2275 0 -2275 0 0 -1280z"/>
  </g>
</svg>
```
Parsing the SVG above yields the mask Path, shown as the black area in the figure below.
Combining the video width/height with the SVG gives the human-figure Path, shown as the white area in the figure above.
Draw the danmaku on a View (the danmaku rendering itself is implemented by your app according to your business needs), then draw the human-figure Path with a Paint whose transfer mode is PorterDuff.Mode.DST_OUT, which erases the danmaku pixels covering the figure.
Videos that need masked danmaku must be transcoded in the VOD space in advance: enable masked danmaku in the multimedia AI template, and apply the template in a workflow to transcode the target videos and generate the mask information.
```java
// 1. To use masked danmaku, enable this option before playback starts
engine.setIntOption(TTVideoEngine.PLAYER_OPTION_ENABLE_OPEN_MASK_THREAD, 1);
// 2. This option toggles the mask-info callback during playback
engine.setIntOption(TTVideoEngine.PLAYER_OPTION_ENABLE_OPEN_BARRAGE_MASK, 1);
// 3. Register the mask-info callback
engine.setMaskInfoListener(maskInfoListener);

public interface MaskInfoListener {
    /**
     * Mask information.
     *
     * @param code status code
     * @param pts  timestamp
     * @param info SVG XML
     */
    void onMaskInfoCallback(int code, int pts, String info);
}
```
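As a sketch of how the callback might be consumed, you could cache the SVG strings by pts and look up the one closest to the current playback position while rendering. The interface is redeclared below so the example is self-contained (in your project, implement the TTVideoEngine interface directly); the "code == 0 means success" check and the cache design are assumptions, not part of the SDK:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

class MaskInfoCache {
    // Redeclared for a self-contained example; use the SDK's interface in practice.
    public interface MaskInfoListener {
        void onMaskInfoCallback(int code, int pts, String info);
    }

    // Sorted, thread-safe map: the callback arrives on an SDK thread,
    // while lookups happen on the render thread.
    private final ConcurrentSkipListMap<Integer, String> svgByPts = new ConcurrentSkipListMap<>();

    public final MaskInfoListener listener = new MaskInfoListener() {
        @Override
        public void onMaskInfoCallback(int code, int pts, String info) {
            if (code != 0 || info == null) {
                return; // assumption: a non-zero code means no usable mask for this frame
            }
            svgByPts.put(pts, info);
        }
    };

    /** Returns the SVG whose pts is closest at or before the given position, or null. */
    public String svgForPosition(int positionMs) {
        Map.Entry<Integer, String> e = svgByPts.floorEntry(positionMs);
        return e == null ? null : e.getValue();
    }
}
```

A real implementation would also evict stale entries (for example, anything far behind the playhead) to bound memory.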
```java
// Set the video source URL
engine.setDirectUrlUseDataLoader(videoUrl, key);
// Set the mask resource URL
engine.setBarrageMaskUrl(barrageMaskUrl);
```
The AppServer must sign "NeedBarrageMask" into the PlayAuthToken it issues. For how to integrate PlayAuthToken, see the "Obtain a temporary security credential" (i.e., PlayAuthToken) topic in the server SDK overview; its parameters take the same values as in "Obtain a playback URL". For details, see the "Set the playback source" section in "Quick start".
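Purely as an illustration of where the flag fits in (everything below except the "NeedBarrageMask" name is an assumption; the actual signing API and parameter values come from the PlayAuthToken documentation), the AppServer side might add the flag to the query parameters it signs into the token:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class PlayAuthTokenParams {
    /**
     * Hypothetical helper: collects the query parameters to be signed into the
     * PlayAuthToken. "Vid" stands for the same video ID used when fetching the
     * play URL; the value of "NeedBarrageMask" is an assumption here — consult
     * the PlayAuthToken docs for the exact value to sign.
     */
    static Map<String, String> buildParams(String vid) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("Vid", vid);
        params.put("NeedBarrageMask", "true");
        return params;
    }
}
```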
To parse the SVG and obtain the human-figure Path, refer to the following code.
```java
import android.graphics.Matrix;
import android.graphics.Path;
import android.graphics.RectF;
import android.os.Build;
import android.text.TextUtils;

import androidx.core.graphics.PathParser;

import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathFactory;

public class DanmakuMaskParseUtil {

    private static final String NODE_PATH_START = "d";
    private static final String NODE_WIDTH = "width";
    private static final String NODE_HEIGHT = "height";
    private static final String TRANSFORM = "transform";
    private static final String SCALE = "scale";
    private static final String TRANSLATE = "translate";

    private static final Matrix mMatrix = new Matrix();

    /**
     * @param svgData      SVG data
     * @param videoRectF   video frame: left, top, right, bottom
     * @param screenWidth  screen width
     * @param screenHeight screen height
     * @return the human-figure Path
     */
    public static Path parseSVG(String svgData, RectF videoRectF, int screenWidth, int screenHeight) {
        if (TextUtils.isEmpty(svgData)) {
            return new Path();
        }
        Path fullPath = new Path();
        float videoViewWidth = videoRectF.width();
        float videoViewHeight = videoRectF.height();
        float videoLeftPosition = videoRectF.left;
        float videoTopPosition = videoRectF.top;
        final String targetSvgData = getSuitSvgStr(svgData);
        InputStream inputStream = new ByteArrayInputStream(targetSvgData.getBytes(StandardCharsets.UTF_8));
        try {
            final Document document = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(inputStream);
            final String xpathExpression = "//@*";
            final XPath xPath = XPathFactory.newInstance().newXPath();
            final XPathExpression expression = xPath.compile(xpathExpression);
            final NodeList svgPaths = (NodeList) expression.evaluate(document, XPathConstants.NODESET);

            int maskWidth = 1;
            int maskHeight = 1;
            String transform = "";
            List<Path> pathList = new ArrayList<>();
            // Walk all attributes to collect the mask paths, mask size, and transform
            for (int i = 0; i < svgPaths.getLength(); i++) {
                final Node node = svgPaths.item(i);
                switch (node.getNodeName()) {
                    case NODE_PATH_START:
                        final Path path = PathParser.createPathFromPathData(node.getTextContent());
                        pathList.add(path);
                        break;
                    case NODE_WIDTH:
                        maskWidth = Integer.parseInt(node.getTextContent().replace("px", ""));
                        break;
                    case NODE_HEIGHT:
                        maskHeight = Integer.parseInt(node.getTextContent().replace("px", ""));
                        break;
                    case TRANSFORM:
                        transform = node.getTextContent();
                        break;
                }
            }

            float svgTranslateX = 0;
            float svgTranslateY = 0;
            float svgScaleX = 1;
            float svgScaleY = 1;
            if (!TextUtils.isEmpty(transform)) {
                Point translate = getTransform(transform, TRANSLATE);
                if (translate != null) {
                    svgTranslateX = translate.x;
                    svgTranslateY = translate.y;
                }
                Point scale = getTransform(transform, SCALE);
                if (scale != null) {
                    svgScaleX = scale.x;
                    svgScaleY = scale.y;
                }
            }

            if (pathList.size() > 0) {
                // Start from the full video rect, clamped to the screen
                RectF fullRectF = new RectF(videoRectF);
                fullRectF.left = Math.max(fullRectF.left, 0);
                fullRectF.top = Math.max(fullRectF.top, 0);
                if (screenHeight != 0 && screenWidth != 0) {
                    if (fullRectF.right > fullRectF.bottom) {
                        fullRectF.right = Math.min(screenHeight, videoRectF.right);
                        fullRectF.bottom = Math.min(screenWidth, videoRectF.bottom);
                    } else {
                        fullRectF.right = Math.min(screenWidth, videoRectF.right);
                        fullRectF.bottom = Math.min(screenHeight, videoRectF.bottom);
                    }
                }
                fullPath.addRect(fullRectF, Path.Direction.CW);
            }

            // Subtract each mask path from the full rect to obtain the human-figure path
            for (final Path originPath : pathList) {
                mMatrix.reset();
                mMatrix.postScale(svgScaleX, svgScaleY);
                mMatrix.postTranslate(svgTranslateX, svgTranslateY);
                final float scaleX = videoViewWidth * 1.0f / maskWidth;
                final float scaleY = videoViewHeight * 1.0f / maskHeight;
                mMatrix.postScale(scaleX, scaleY);
                Path path = new Path(originPath);
                path.transform(mMatrix);
                if (Build.VERSION.SDK_INT >= 21) {
                    fullPath.op(path, Path.Op.DIFFERENCE);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                inputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return fullPath;
    }

    private static Point getTransform(final String transformStr, final String fun) {
        int index = transformStr.indexOf(fun);
        if (index == -1) {
            return null;
        }
        String valueStr = transformStr.substring(index + fun.length());
        String regex = "\\((\\-|\\+)?\\d+(\\.\\d+)?,(\\-|\\+)?\\d+(\\.\\d+)?\\)";
        Pattern compile = Pattern.compile(regex);
        Matcher matcher = compile.matcher(valueStr);
        if (!matcher.lookingAt()) {
            return null;
        }
        int end = matcher.end();
        String[] result = valueStr.substring(0, end).split(",");
        if (result.length != 2) {
            return null;
        }
        String x = result[0].substring(1);
        String y = result[1].substring(0, result[1].length() - 1);
        return new Point(Float.parseFloat(x), Float.parseFloat(y));
    }

    private static String getSuitSvgStr(String originStr) {
        final int index = originStr.indexOf("</svg>");
        if (index != -1) {
            return originStr.substring(0, index + 6);
        }
        return originStr;
    }

    private static class Point {
        private final float x;
        private final float y;

        public Point(final float x, final float y) {
            this.x = x;
            this.y = y;
        }
    }
}
```
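The matrix chain in the code above (the SVG's own scale/translate, followed by mask-to-view scaling) can be checked with plain arithmetic. The following standalone sketch mirrors the same math outside of Android's Matrix class; the 910×512 on-screen video size in the usage note is a made-up example, paired with the 455×256 mask and the transform from the sample SVG:

```java
class MaskScaleMath {
    /**
     * Mirrors the matrix built in DanmakuMaskParseUtil: a point (x, y) in the
     * SVG path's coordinate space is first scaled and translated by the SVG's
     * own transform, then scaled from mask size to the on-screen video size.
     */
    static float[] maskToView(float x, float y,
                              float svgScaleX, float svgScaleY,
                              float svgTranslateX, float svgTranslateY,
                              float maskWidth, float maskHeight,
                              float videoViewWidth, float videoViewHeight) {
        // postScale(svgScaleX, svgScaleY) then postTranslate(...)
        float px = x * svgScaleX + svgTranslateX;
        float py = y * svgScaleY + svgTranslateY;
        // postScale(videoViewWidth / maskWidth, videoViewHeight / maskHeight)
        px *= videoViewWidth / maskWidth;
        py *= videoViewHeight / maskHeight;
        return new float[]{px, py};
    }
}
```

For the sample SVG (`translate(0,256) scale(0.1,-0.1)`, 455×256 mask) shown on a hypothetical 910×512 view, the path origin (0, 0) maps to (0, 512): the negative Y scale plus the 256 translate flip the path upright, and the mask-to-view ratio of 2 doubles every coordinate.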
Draw the danmaku on a View, then draw the human-figure Path with a Paint set to PorterDuff.Mode.DST_OUT to implement the masked danmaku.
```java
int layer = canvas.saveLayer(0, 0, mWidth, mHeight, null, Canvas.ALL_SAVE_FLAG);
// 1. Draw the danmaku
IRenderer.RenderingState rs = handler.draw(canvas);
// 2. Erase the human-figure area with PorterDuff.Mode.DST_OUT
Paint paint = new Paint();
paint.setColor(Color.WHITE);
paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.DST_OUT));
for (final Path path : mMasks) {
    canvas.drawPath(path, paint);
}
paint.setXfermode(null);
canvas.restoreToCount(layer);
```