Early July Update Roundup | Spotlight on Integration, Security, and Deep AI Evolution

Update highlights: this release focuses on strengthening system integration and advancing AI know-how, adding 9 new resources across 4 core modules and optimizing 4 existing ones. Click a title to learn more (keep engaging to earn 麦豆 points and unlock advanced skills).

1. Featured Recommendation: "Scenario-Based Data Analysis in Practice" Course Operation Manual

The complete hands-on companion guide to June's blockbuster course, walking you step by step through reproducing the practice scenarios!

2. Practical Tips

"Efficiently Tackling Resource-Integration Challenges" → starts from the basics and digs into how integration really works

3. Development Skills Breakthrough

"Calling Smartbi Interfaces from Third-Party Systems" → explains how to obtain the required jar packages for system integration and the basic flow of invoking the interfaces from code.

"Introduction to Integration Interfaces" → surveys the interfaces Smartbi currently provides and the calling flow for each.

4. AI Daily Learning

"DeepSeek-R1-0528 Model Upgrade: A Dual Upgrade in Reasoning and Ecosystem" → breaks down the key techniques behind the model's 40% performance gain (Tech Frontier)

"A Quick Summary of Common Machine Learning Paradigms and Their Differences" → an illustrated comparison of supervised, unsupervised, and reinforcement learning, their differences and application scenarios (Fundamentals Refresher)

5. Resource Updates

"CAS Single Sign-On V2" released → integrates Smartbi into a CAS platform and implements single sign-on

"Organization/User/Role Information Management API" released → a set of HTTP APIs for managing organization, user, and role information

"Organization/User Synchronization with the 竹云 Unified Identity Authentication Platform" released → Smartbi wraps the corresponding service interfaces for real-time invocation by the 竹云 platform, keeping organization, user, and role information in sync.

"Interactive Dashboards Support Custom Fonts" optimized → fixed an issue where custom fonts did not take effect while a text component was in editing state

"Allow Access Only from Specific Mobile Apps on the External Network" optimized → for V11, added access-restriction support for DingTalk (钉钉) and WeCom (企业微信)

"AD Domain (LDAP/LDAPS) Login Authentication" optimized → fixed an issue where the whitelist status was updated without first checking whether the user exists

"Landing Metadata Analysis into the Knowledge Base" optimized → added a logic check when retrieving a resource's creator and improved handling of null values, empty objects, and similar edge cases

麦粉社区

Thread: How do I use the LEN function?

数据分析 posted on 2023-4-23 11:56:42

The expression LEN([字段名]) should be fine, right? But it throws an error when I run it.


Log:


2023-04-23 11:51:02.152 [993] INFO launcher.DefaultLauncher.run:59 - Task start. (id:76fac8d9c16e6859656a643cd0db203e,name:DERIVE_COLUMN)
2023-04-23 11:51:02.163 [993] INFO repository.NodeStatusRepository.executeUpdate:138 - Report status successful.(state:RUNNING)
2023-04-23 11:51:02.170 [993] WARN sql.SparkSession$Builder.logWarning:69 - Using an existing SparkSession; some spark core configurations may not take effect.
2023-04-23 11:51:02.170 [993] INFO node.GenericNode.start:107 - Node start. (id:76fac8d9c16e6859656a643cd0db203e,name:DERIVE_COLUMN)
2023-04-23 11:51:02.203 [993] INFO datasources.InMemoryFileIndex.logInfo:57 - It took 3 ms to list leaf files for 1 paths.
2023-04-23 11:51:02.283 [993] INFO spark.SparkContext.logInfo:57 - Starting job: parquet at DatasetEvent.java:229
2023-04-23 11:51:02.325 [993] INFO scheduler.DAGScheduler.logInfo:57 - Job 77 finished: parquet at DatasetEvent.java:229, took 0.039619 s
2023-04-23 11:51:02.332 [993] INFO util.EventSerializeUtil.deserialize:106 - Deserialization event finished,took 0.162 s
2023-04-23 11:51:02.432 [993] ERROR node.GenericNode.handleExecuteError:148 - Node execution failed.(id:76fac8d9c16e6859656a643cd0db203e,name:DERIVE_COLUMN)
org.apache.spark.sql.AnalysisException: Undefined function: 'len'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 11
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$16.$anonfun$applyOrElse$121(Analyzer.scala:2108) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:53) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$16.applyOrElse(Analyzer.scala:2108) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$16.applyOrElse(Analyzer.scala:2099) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:318) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:318) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:323) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:323) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsDown$1(QueryPlan.scala:94) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:116) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:116) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:127) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$3(QueryPlan.scala:132) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238) ~[scala-library-2.12.10.jar:?]
        at scala.collection.immutable.List.foreach(List.scala:392) ~[scala-library-2.12.10.jar:?]
        at scala.collection.TraversableLike.map(TraversableLike.scala:238) ~[scala-library-2.12.10.jar:?]
        at scala.collection.TraversableLike.map$(TraversableLike.scala:231) ~[scala-library-2.12.10.jar:?]
        at scala.collection.immutable.List.map(List.scala:298) ~[scala-library-2.12.10.jar:?]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:132) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:137) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:137) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:94) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressions(QueryPlan.scala:85) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveExpressions$1.applyOrElse(AnalysisHelper.scala:153) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveExpressions$1.applyOrElse(AnalysisHelper.scala:152) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$2(AnalysisHelper.scala:110) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:110) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:223) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:108) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:106) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:73) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:72) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveExpressions(AnalysisHelper.scala:152) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveExpressions$(AnalysisHelper.scala:151) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveExpressions(LogicalPlan.scala:29) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$.apply(Analyzer.scala:2099) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$.apply(Analyzer.scala:2096) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:216) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at scala.collection.IndexedSeqOptimized.foldLeft(IndexedSeqOptimized.scala:60) ~[scala-library-2.12.10.jar:?]
        at scala.collection.IndexedSeqOptimized.foldLeft$(IndexedSeqOptimized.scala:68) ~[scala-library-2.12.10.jar:?]
        at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:38) ~[scala-library-2.12.10.jar:?]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:213) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:205) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at scala.collection.immutable.List.foreach(List.scala:392) ~[scala-library-2.12.10.jar:?]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:205) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:198) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:192) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:155) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:183) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:183) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:176) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:230) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:175) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:73) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:73) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:71) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:63) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:98) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at smartbix.datamining.engine.execute.node.preprocess.DeriveColumnNode.execute(DeriveColumnNode.java:67) ~[EngineCommonNode-1.0.jar:?]
        at smartbix.datamining.engine.execute.node.GenericNode.start(GenericNode.java:117) ~[EngineCore-1.0.jar:?]
        at smartbix.datamining.engine.agent.execute.executor.DefaultNodeExecutor.execute(DefaultNodeExecutor.java:43) ~[EngineAgent-1.0.jar:?]
        at smartbix.datamining.engine.agent.execute.launcher.DefaultLauncher.run(DefaultLauncher.java:67) ~[EngineAgent-1.0.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_202-ea]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_202-ea]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_202-ea]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_202-ea]
        at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_202-ea]
2023-04-23 11:51:02.449 [993] INFO repository.NodeStatusRepository.executeUpdate:138 - Report status successful.(state:FAIL)
Posted on 2023-4-23 13:37:55
In what scenario are you using the LEN function? Judging from the error, the engine executing the expression does not support LEN: the log shows Spark SQL 3.1.3 reporting "Undefined function: 'len'".
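
For reference, a minimal sketch of the likely fix, assuming the derived-column expression is handed to the Spark SQL 3.1.3 engine shown in the log: Spark 3.1.x has no built-in LEN, but it does provide length() for string length, so LEN([字段名]) would become length([字段名]). The snippet below is illustrative only; the object name, session setup, and sample data are hypothetical stand-ins, not Smartbi's actual engine code.

import org.apache.spark.sql.SparkSession

object LengthDemo {
  def main(args: Array[String]): Unit = {
    // Local session purely for illustration; the engine in the log manages its own SparkSession.
    val spark = SparkSession.builder().master("local[*]").appName("length-demo").getOrCreate()
    import spark.implicits._

    // Hypothetical sample data standing in for the user's field.
    Seq("alpha", "smartbi").toDF("field_name").createOrReplaceTempView("t")

    // length() is Spark SQL's built-in string-length function; LEN is undefined in 3.1.x.
    spark.sql("SELECT field_name, length(field_name) AS field_len FROM t").show()

    spark.stop()
  }
}

If the expression editor passes the formula straight through to Spark SQL, rewriting LEN([字段名]) as length([字段名]) should avoid the "Undefined function" analysis error.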
